00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 1996 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3262 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.095 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.096 The recommended git tool is: git 00:00:00.096 using credential 00000000-0000-0000-0000-000000000002 00:00:00.098 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.137 Fetching changes from the remote Git repository 00:00:00.142 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.182 Using shallow fetch with depth 1 00:00:00.182 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.182 > git --version # timeout=10 00:00:00.212 > git --version # 'git version 2.39.2' 00:00:00.212 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.235 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.235 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.697 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.709 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.721 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:04.721 > git config core.sparsecheckout # timeout=10 00:00:04.732 > git read-tree -mu HEAD # timeout=10 00:00:04.748 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:04.766 Commit message: "inventory: add WCP3 to free inventory" 00:00:04.766 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:04.873 [Pipeline] Start of Pipeline 00:00:04.889 [Pipeline] library 00:00:04.891 Loading library shm_lib@master 00:00:04.891 Library shm_lib@master is cached. Copying from home. 
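For reference, the fetch/checkout sequence logged above can be reproduced by hand roughly as follows. This is a minimal sketch only: it assumes anonymous read access to the same Gerrit mirror (the job itself authenticates via GIT_ASKPASS and goes through an HTTP proxy, both omitted here), and the local directory name is hypothetical.

# sketch of the shallow, pinned checkout shown in the log above
repo=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool   # URL from the log
rev=9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d                          # pinned jbp revision from the log
workdir=jbp-checkout                                                  # hypothetical local directory
git init "$workdir" && cd "$workdir"
git config remote.origin.url "$repo"
# shallow fetch of master only, matching the job's "Using shallow fetch with depth 1"
git fetch --tags --force --progress --depth=1 -- "$repo" refs/heads/master
git rev-parse FETCH_HEAD^{commit}      # should print the pinned revision
git checkout -f "$rev"                 # detached checkout of the exact commit the job built

The shallow fetch is there only to pin the jbp helper scripts (jenkins/jjb-config/...) at a single revision; no history is needed, so depth 1 keeps the clone small.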
00:00:04.907 [Pipeline] node 00:00:19.912 Still waiting to schedule task 00:00:19.912 ‘CYP11’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘CYP13’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘CYP7’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘CYP8’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘FCP03’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘FCP04’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘FCP07’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘FCP08’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘FCP09’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘FCP10’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘FCP11’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘FCP12’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘GP10’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘GP13’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘GP14’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘GP15’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘GP16’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘GP18’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘GP19’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘GP20’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘GP21’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘GP22’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘GP3’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘GP4’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘GP5’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘GP8’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘GP9’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘Jenkins’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘ME1’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘ME2’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘ME3’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘PE5’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.912 ‘SM10’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘SM11’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘SM1’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘SM28’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘SM29’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘SM2’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘SM30’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘SM31’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘SM32’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘SM33’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘SM34’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘SM35’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘SM5’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘SM6’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘SM7’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘SM8’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘VM-host-PE1’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘VM-host-PE2’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘VM-host-PE3’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘VM-host-PE4’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘VM-host-SM18’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘VM-host-WFP1’ is offline 00:00:19.913 ‘VM-host-WFP25’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WCP0’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WCP2’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP10’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP11’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP12’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP13’ doesn’t have 
label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP15’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP17’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP22’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP23’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP27’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP28’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP2’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP31’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP32’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP33’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP34’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP35’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP36’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP37’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP38’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP41’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP42’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP46’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP47’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP49’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP53’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP63’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP65’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP66’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP68’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP69’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘WFP9’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘ipxe-staging’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘prc_bsc_waikikibeach64’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘spdk-pxe-01’ doesn’t have label ‘vagrant-vm-host’ 00:00:19.913 ‘spdk-pxe-02’ doesn’t have label ‘vagrant-vm-host’ 00:04:38.419 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:04:38.421 [Pipeline] { 00:04:38.436 [Pipeline] catchError 00:04:38.439 [Pipeline] { 00:04:38.454 [Pipeline] wrap 00:04:38.462 [Pipeline] { 00:04:38.469 [Pipeline] stage 00:04:38.471 [Pipeline] { (Prologue) 00:04:38.492 [Pipeline] echo 00:04:38.494 Node: VM-host-SM0 00:04:38.501 [Pipeline] cleanWs 00:04:38.510 [WS-CLEANUP] Deleting project workspace... 00:04:38.510 [WS-CLEANUP] Deferred wipeout is used... 
00:04:38.516 [WS-CLEANUP] done 00:04:38.724 [Pipeline] setCustomBuildProperty 00:04:38.787 [Pipeline] httpRequest 00:04:38.801 [Pipeline] echo 00:04:38.802 Sorcerer 10.211.164.101 is alive 00:04:38.808 [Pipeline] httpRequest 00:04:38.812 HttpMethod: GET 00:04:38.812 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:04:38.813 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:04:38.814 Response Code: HTTP/1.1 200 OK 00:04:38.814 Success: Status code 200 is in the accepted range: 200,404 00:04:38.815 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:04:38.957 [Pipeline] sh 00:04:39.233 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:04:39.250 [Pipeline] httpRequest 00:04:39.270 [Pipeline] echo 00:04:39.272 Sorcerer 10.211.164.101 is alive 00:04:39.281 [Pipeline] httpRequest 00:04:39.285 HttpMethod: GET 00:04:39.286 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:04:39.286 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:04:39.287 Response Code: HTTP/1.1 200 OK 00:04:39.288 Success: Status code 200 is in the accepted range: 200,404 00:04:39.288 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:04:41.454 [Pipeline] sh 00:04:41.724 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:04:45.040 [Pipeline] sh 00:04:45.314 + git -C spdk log --oneline -n5 00:04:45.315 719d03c6a sock/uring: only register net impl if supported 00:04:45.315 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:04:45.315 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:04:45.315 6c7c1f57e accel: add sequence outstanding stat 00:04:45.315 3bc8e6a26 accel: add utility to put task 00:04:45.335 [Pipeline] withCredentials 00:04:45.345 > git --version # timeout=10 00:04:45.356 > git --version # 'git version 2.39.2' 00:04:45.370 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:04:45.372 [Pipeline] { 00:04:45.383 [Pipeline] retry 00:04:45.385 [Pipeline] { 00:04:45.406 [Pipeline] sh 00:04:45.685 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:04:46.629 [Pipeline] } 00:04:46.653 [Pipeline] // retry 00:04:46.659 [Pipeline] } 00:04:46.682 [Pipeline] // withCredentials 00:04:46.691 [Pipeline] httpRequest 00:04:46.709 [Pipeline] echo 00:04:46.711 Sorcerer 10.211.164.101 is alive 00:04:46.719 [Pipeline] httpRequest 00:04:46.723 HttpMethod: GET 00:04:46.723 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:04:46.724 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:04:46.724 Response Code: HTTP/1.1 200 OK 00:04:46.725 Success: Status code 200 is in the accepted range: 200,404 00:04:46.725 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:04:47.952 [Pipeline] sh 00:04:48.230 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:04:50.151 [Pipeline] sh 00:04:50.433 + git -C dpdk log --oneline -n5 00:04:50.433 caf0f5d395 version: 22.11.4 00:04:50.433 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:04:50.433 dc9c799c7d vhost: fix 
missing spinlock unlock 00:04:50.433 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:04:50.433 6ef77f2a5e net/gve: fix RX buffer size alignment 00:04:50.520 [Pipeline] writeFile 00:04:50.548 [Pipeline] sh 00:04:50.826 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:04:50.841 [Pipeline] sh 00:04:51.121 + cat autorun-spdk.conf 00:04:51.121 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:51.121 SPDK_TEST_NVMF=1 00:04:51.121 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:51.121 SPDK_TEST_URING=1 00:04:51.121 SPDK_TEST_USDT=1 00:04:51.121 SPDK_RUN_UBSAN=1 00:04:51.121 NET_TYPE=virt 00:04:51.121 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:04:51.121 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:04:51.121 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:51.127 RUN_NIGHTLY=1 00:04:51.129 [Pipeline] } 00:04:51.145 [Pipeline] // stage 00:04:51.161 [Pipeline] stage 00:04:51.163 [Pipeline] { (Run VM) 00:04:51.177 [Pipeline] sh 00:04:51.450 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:04:51.450 + echo 'Start stage prepare_nvme.sh' 00:04:51.450 Start stage prepare_nvme.sh 00:04:51.450 + [[ -n 6 ]] 00:04:51.450 + disk_prefix=ex6 00:04:51.450 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:04:51.450 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:04:51.450 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:04:51.450 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:51.450 ++ SPDK_TEST_NVMF=1 00:04:51.450 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:51.450 ++ SPDK_TEST_URING=1 00:04:51.450 ++ SPDK_TEST_USDT=1 00:04:51.450 ++ SPDK_RUN_UBSAN=1 00:04:51.450 ++ NET_TYPE=virt 00:04:51.451 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:04:51.451 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:04:51.451 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:51.451 ++ RUN_NIGHTLY=1 00:04:51.451 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:04:51.451 + nvme_files=() 00:04:51.451 + declare -A nvme_files 00:04:51.451 + backend_dir=/var/lib/libvirt/images/backends 00:04:51.451 + nvme_files['nvme.img']=5G 00:04:51.451 + nvme_files['nvme-cmb.img']=5G 00:04:51.451 + nvme_files['nvme-multi0.img']=4G 00:04:51.451 + nvme_files['nvme-multi1.img']=4G 00:04:51.451 + nvme_files['nvme-multi2.img']=4G 00:04:51.451 + nvme_files['nvme-openstack.img']=8G 00:04:51.451 + nvme_files['nvme-zns.img']=5G 00:04:51.451 + (( SPDK_TEST_NVME_PMR == 1 )) 00:04:51.451 + (( SPDK_TEST_FTL == 1 )) 00:04:51.451 + (( SPDK_TEST_NVME_FDP == 1 )) 00:04:51.451 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:04:51.451 + for nvme in "${!nvme_files[@]}" 00:04:51.451 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:04:51.451 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:04:51.451 + for nvme in "${!nvme_files[@]}" 00:04:51.451 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:04:51.451 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:04:51.451 + for nvme in "${!nvme_files[@]}" 00:04:51.451 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:04:51.451 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:04:51.451 + for nvme in "${!nvme_files[@]}" 00:04:51.451 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:04:51.451 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:04:51.451 + for nvme in "${!nvme_files[@]}" 00:04:51.451 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:04:51.451 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:04:51.451 + for nvme in "${!nvme_files[@]}" 00:04:51.451 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:04:51.451 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:04:51.451 + for nvme in "${!nvme_files[@]}" 00:04:51.451 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:04:51.709 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:04:51.709 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:04:51.709 + echo 'End stage prepare_nvme.sh' 00:04:51.709 End stage prepare_nvme.sh 00:04:51.719 [Pipeline] sh 00:04:51.999 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:04:51.999 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora38 00:04:51.999 00:04:51.999 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:04:51.999 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:04:51.999 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:04:51.999 HELP=0 00:04:51.999 DRY_RUN=0 00:04:51.999 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:04:51.999 NVME_DISKS_TYPE=nvme,nvme, 00:04:51.999 NVME_AUTO_CREATE=0 00:04:51.999 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:04:51.999 NVME_CMB=,, 00:04:51.999 NVME_PMR=,, 00:04:51.999 NVME_ZNS=,, 00:04:51.999 NVME_MS=,, 00:04:51.999 NVME_FDP=,, 
00:04:51.999 SPDK_VAGRANT_DISTRO=fedora38 00:04:51.999 SPDK_VAGRANT_VMCPU=10 00:04:51.999 SPDK_VAGRANT_VMRAM=12288 00:04:51.999 SPDK_VAGRANT_PROVIDER=libvirt 00:04:51.999 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:04:51.999 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:04:51.999 SPDK_OPENSTACK_NETWORK=0 00:04:51.999 VAGRANT_PACKAGE_BOX=0 00:04:51.999 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:04:51.999 FORCE_DISTRO=true 00:04:51.999 VAGRANT_BOX_VERSION= 00:04:51.999 EXTRA_VAGRANTFILES= 00:04:51.999 NIC_MODEL=e1000 00:04:51.999 00:04:51.999 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:04:51.999 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:04:55.277 Bringing machine 'default' up with 'libvirt' provider... 00:04:56.210 ==> default: Creating image (snapshot of base box volume). 00:04:56.210 ==> default: Creating domain with the following settings... 00:04:56.467 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720786584_d28473396ac7d57453d9 00:04:56.467 ==> default: -- Domain type: kvm 00:04:56.467 ==> default: -- Cpus: 10 00:04:56.467 ==> default: -- Feature: acpi 00:04:56.467 ==> default: -- Feature: apic 00:04:56.467 ==> default: -- Feature: pae 00:04:56.467 ==> default: -- Memory: 12288M 00:04:56.467 ==> default: -- Memory Backing: hugepages: 00:04:56.467 ==> default: -- Management MAC: 00:04:56.467 ==> default: -- Loader: 00:04:56.467 ==> default: -- Nvram: 00:04:56.467 ==> default: -- Base box: spdk/fedora38 00:04:56.467 ==> default: -- Storage pool: default 00:04:56.467 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720786584_d28473396ac7d57453d9.img (20G) 00:04:56.467 ==> default: -- Volume Cache: default 00:04:56.467 ==> default: -- Kernel: 00:04:56.467 ==> default: -- Initrd: 00:04:56.467 ==> default: -- Graphics Type: vnc 00:04:56.467 ==> default: -- Graphics Port: -1 00:04:56.467 ==> default: -- Graphics IP: 127.0.0.1 00:04:56.467 ==> default: -- Graphics Password: Not defined 00:04:56.467 ==> default: -- Video Type: cirrus 00:04:56.467 ==> default: -- Video VRAM: 9216 00:04:56.467 ==> default: -- Sound Type: 00:04:56.467 ==> default: -- Keymap: en-us 00:04:56.467 ==> default: -- TPM Path: 00:04:56.467 ==> default: -- INPUT: type=mouse, bus=ps2 00:04:56.467 ==> default: -- Command line args: 00:04:56.467 ==> default: -> value=-device, 00:04:56.467 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:04:56.467 ==> default: -> value=-drive, 00:04:56.467 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:04:56.467 ==> default: -> value=-device, 00:04:56.467 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:56.467 ==> default: -> value=-device, 00:04:56.467 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:04:56.467 ==> default: -> value=-drive, 00:04:56.467 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:04:56.467 ==> default: -> value=-device, 00:04:56.467 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:56.467 ==> default: -> value=-drive, 
00:04:56.467 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:04:56.467 ==> default: -> value=-device, 00:04:56.467 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:56.467 ==> default: -> value=-drive, 00:04:56.467 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:04:56.467 ==> default: -> value=-device, 00:04:56.467 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:56.467 ==> default: Creating shared folders metadata... 00:04:56.467 ==> default: Starting domain. 00:04:58.381 ==> default: Waiting for domain to get an IP address... 00:05:16.452 ==> default: Waiting for SSH to become available... 00:05:16.452 ==> default: Configuring and enabling network interfaces... 00:05:19.734 default: SSH address: 192.168.121.69:22 00:05:19.734 default: SSH username: vagrant 00:05:19.734 default: SSH auth method: private key 00:05:22.262 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:05:29.040 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:05:34.302 ==> default: Mounting SSHFS shared folder... 00:05:36.203 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:05:36.203 ==> default: Checking Mount.. 00:05:37.136 ==> default: Folder Successfully Mounted! 00:05:37.136 ==> default: Running provisioner: file... 00:05:38.094 default: ~/.gitconfig => .gitconfig 00:05:38.352 00:05:38.352 SUCCESS! 00:05:38.352 00:05:38.352 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:05:38.352 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:05:38.352 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:05:38.352 00:05:38.362 [Pipeline] } 00:05:38.380 [Pipeline] // stage 00:05:38.390 [Pipeline] dir 00:05:38.390 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:05:38.392 [Pipeline] { 00:05:38.407 [Pipeline] catchError 00:05:38.409 [Pipeline] { 00:05:38.426 [Pipeline] sh 00:05:38.707 + vagrant ssh-config --host vagrant 00:05:38.707 + sed -ne /^Host/,$p 00:05:38.707 + tee ssh_conf 00:05:42.895 Host vagrant 00:05:42.895 HostName 192.168.121.69 00:05:42.895 User vagrant 00:05:42.895 Port 22 00:05:42.895 UserKnownHostsFile /dev/null 00:05:42.895 StrictHostKeyChecking no 00:05:42.895 PasswordAuthentication no 00:05:42.895 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:05:42.895 IdentitiesOnly yes 00:05:42.895 LogLevel FATAL 00:05:42.895 ForwardAgent yes 00:05:42.895 ForwardX11 yes 00:05:42.895 00:05:42.909 [Pipeline] withEnv 00:05:42.912 [Pipeline] { 00:05:42.926 [Pipeline] sh 00:05:43.200 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:05:43.200 source /etc/os-release 00:05:43.200 [[ -e /image.version ]] && img=$(< /image.version) 00:05:43.200 # Minimal, systemd-like check. 
00:05:43.200 if [[ -e /.dockerenv ]]; then 00:05:43.200 # Clear garbage from the node's name: 00:05:43.200 # agt-er_autotest_547-896 -> autotest_547-896 00:05:43.200 # $HOSTNAME is the actual container id 00:05:43.200 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:05:43.200 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:05:43.200 # We can assume this is a mount from a host where container is running, 00:05:43.200 # so fetch its hostname to easily identify the target swarm worker. 00:05:43.200 container="$(< /etc/hostname) ($agent)" 00:05:43.200 else 00:05:43.200 # Fallback 00:05:43.200 container=$agent 00:05:43.200 fi 00:05:43.200 fi 00:05:43.200 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:05:43.200 00:05:43.213 [Pipeline] } 00:05:43.236 [Pipeline] // withEnv 00:05:43.244 [Pipeline] setCustomBuildProperty 00:05:43.261 [Pipeline] stage 00:05:43.263 [Pipeline] { (Tests) 00:05:43.280 [Pipeline] sh 00:05:43.555 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:05:43.569 [Pipeline] sh 00:05:43.846 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:05:44.172 [Pipeline] timeout 00:05:44.172 Timeout set to expire in 30 min 00:05:44.174 [Pipeline] { 00:05:44.186 [Pipeline] sh 00:05:44.457 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:05:45.024 HEAD is now at 719d03c6a sock/uring: only register net impl if supported 00:05:45.038 [Pipeline] sh 00:05:45.323 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:05:45.594 [Pipeline] sh 00:05:45.871 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:05:45.889 [Pipeline] sh 00:05:46.168 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:05:46.168 ++ readlink -f spdk_repo 00:05:46.168 + DIR_ROOT=/home/vagrant/spdk_repo 00:05:46.168 + [[ -n /home/vagrant/spdk_repo ]] 00:05:46.168 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:05:46.168 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:05:46.168 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:05:46.168 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:05:46.168 + [[ -d /home/vagrant/spdk_repo/output ]] 00:05:46.168 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:05:46.168 + cd /home/vagrant/spdk_repo 00:05:46.168 + source /etc/os-release 00:05:46.168 ++ NAME='Fedora Linux' 00:05:46.168 ++ VERSION='38 (Cloud Edition)' 00:05:46.168 ++ ID=fedora 00:05:46.168 ++ VERSION_ID=38 00:05:46.168 ++ VERSION_CODENAME= 00:05:46.168 ++ PLATFORM_ID=platform:f38 00:05:46.168 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:05:46.168 ++ ANSI_COLOR='0;38;2;60;110;180' 00:05:46.168 ++ LOGO=fedora-logo-icon 00:05:46.168 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:05:46.168 ++ HOME_URL=https://fedoraproject.org/ 00:05:46.168 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:05:46.168 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:05:46.168 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:05:46.168 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:05:46.168 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:05:46.168 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:05:46.168 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:05:46.168 ++ SUPPORT_END=2024-05-14 00:05:46.168 ++ VARIANT='Cloud Edition' 00:05:46.168 ++ VARIANT_ID=cloud 00:05:46.168 + uname -a 00:05:46.168 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:05:46.168 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:46.734 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.734 Hugepages 00:05:46.734 node hugesize free / total 00:05:46.734 node0 1048576kB 0 / 0 00:05:46.734 node0 2048kB 0 / 0 00:05:46.734 00:05:46.734 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:46.734 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:46.734 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:46.734 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:46.734 + rm -f /tmp/spdk-ld-path 00:05:46.734 + source autorun-spdk.conf 00:05:46.734 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:46.734 ++ SPDK_TEST_NVMF=1 00:05:46.734 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:46.734 ++ SPDK_TEST_URING=1 00:05:46.734 ++ SPDK_TEST_USDT=1 00:05:46.734 ++ SPDK_RUN_UBSAN=1 00:05:46.734 ++ NET_TYPE=virt 00:05:46.734 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:05:46.734 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:05:46.734 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:46.734 ++ RUN_NIGHTLY=1 00:05:46.734 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:05:46.734 + [[ -n '' ]] 00:05:46.734 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:05:47.007 + for M in /var/spdk/build-*-manifest.txt 00:05:47.007 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:05:47.007 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:47.007 + for M in /var/spdk/build-*-manifest.txt 00:05:47.007 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:05:47.007 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:47.007 ++ uname 00:05:47.007 + [[ Linux == \L\i\n\u\x ]] 00:05:47.007 + sudo dmesg -T 00:05:47.007 + sudo dmesg --clear 00:05:47.007 + dmesg_pid=5898 00:05:47.007 + sudo dmesg -Tw 00:05:47.007 + [[ Fedora Linux == FreeBSD ]] 00:05:47.007 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:47.007 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:47.007 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:05:47.007 + [[ -x /usr/src/fio-static/fio ]] 00:05:47.007 + export FIO_BIN=/usr/src/fio-static/fio 00:05:47.007 + FIO_BIN=/usr/src/fio-static/fio 00:05:47.007 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:05:47.007 + [[ ! -v VFIO_QEMU_BIN ]] 00:05:47.007 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:05:47.007 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:47.007 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:47.007 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:05:47.007 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:47.007 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:47.007 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:47.007 Test configuration: 00:05:47.007 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:47.007 SPDK_TEST_NVMF=1 00:05:47.007 SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:47.007 SPDK_TEST_URING=1 00:05:47.007 SPDK_TEST_USDT=1 00:05:47.007 SPDK_RUN_UBSAN=1 00:05:47.007 NET_TYPE=virt 00:05:47.007 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:05:47.007 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:05:47.007 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:47.007 RUN_NIGHTLY=1 12:17:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:47.007 12:17:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:05:47.007 12:17:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.007 12:17:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.007 12:17:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.007 12:17:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.007 12:17:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.007 12:17:16 -- paths/export.sh@5 -- $ export PATH 00:05:47.007 12:17:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.007 12:17:16 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:05:47.007 12:17:16 -- 
common/autobuild_common.sh@444 -- $ date +%s 00:05:47.007 12:17:16 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720786636.XXXXXX 00:05:47.007 12:17:16 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720786636.HpaYyT 00:05:47.007 12:17:16 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:05:47.007 12:17:16 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']' 00:05:47.007 12:17:16 -- common/autobuild_common.sh@451 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:05:47.007 12:17:16 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:05:47.007 12:17:16 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:05:47.007 12:17:16 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:05:47.007 12:17:16 -- common/autobuild_common.sh@460 -- $ get_config_params 00:05:47.007 12:17:16 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:05:47.007 12:17:16 -- common/autotest_common.sh@10 -- $ set +x 00:05:47.007 12:17:16 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:05:47.007 12:17:16 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:05:47.007 12:17:16 -- pm/common@17 -- $ local monitor 00:05:47.007 12:17:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:47.007 12:17:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:47.007 12:17:16 -- pm/common@21 -- $ date +%s 00:05:47.007 12:17:16 -- pm/common@25 -- $ sleep 1 00:05:47.007 12:17:16 -- pm/common@21 -- $ date +%s 00:05:47.007 12:17:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720786636 00:05:47.007 12:17:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720786636 00:05:47.007 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720786636_collect-vmstat.pm.log 00:05:47.265 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720786636_collect-cpu-load.pm.log 00:05:48.201 12:17:17 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:05:48.201 12:17:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:05:48.201 12:17:17 -- spdk/autobuild.sh@12 -- $ umask 022 00:05:48.201 12:17:17 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:05:48.201 12:17:17 -- spdk/autobuild.sh@16 -- $ date -u 00:05:48.201 Fri Jul 12 12:17:17 PM UTC 2024 00:05:48.201 12:17:17 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:05:48.201 v24.09-pre-202-g719d03c6a 00:05:48.201 12:17:17 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:05:48.201 12:17:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:05:48.201 12:17:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:05:48.201 12:17:17 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:05:48.201 12:17:17 -- common/autotest_common.sh@1105 -- $ xtrace_disable 
00:05:48.201 12:17:17 -- common/autotest_common.sh@10 -- $ set +x 00:05:48.201 ************************************ 00:05:48.201 START TEST ubsan 00:05:48.201 ************************************ 00:05:48.201 using ubsan 00:05:48.201 12:17:17 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:05:48.201 00:05:48.201 real 0m0.000s 00:05:48.201 user 0m0.000s 00:05:48.201 sys 0m0.000s 00:05:48.201 12:17:17 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:05:48.201 ************************************ 00:05:48.201 12:17:17 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:05:48.201 END TEST ubsan 00:05:48.201 ************************************ 00:05:48.201 12:17:17 -- common/autotest_common.sh@1142 -- $ return 0 00:05:48.201 12:17:17 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:05:48.201 12:17:17 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:05:48.201 12:17:17 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk 00:05:48.201 12:17:17 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:05:48.201 12:17:17 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:05:48.201 12:17:17 -- common/autotest_common.sh@10 -- $ set +x 00:05:48.201 ************************************ 00:05:48.201 START TEST build_native_dpdk 00:05:48.201 ************************************ 00:05:48.201 12:17:17 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:05:48.201 caf0f5d395 version: 22.11.4 00:05:48.201 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:05:48.201 dc9c799c7d vhost: fix missing spinlock unlock 00:05:48.201 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:05:48.201 6ef77f2a5e net/gve: fix RX buffer size alignment 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:05:48.201 12:17:17 
build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:05:48.201 12:17:17 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:05:48.201 patching file config/rte_config.h 00:05:48.201 Hunk #1 succeeded at 60 (offset 1 line). 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:05:48.201 12:17:17 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:05:53.468 The Meson build system 00:05:53.468 Version: 1.3.1 00:05:53.468 Source dir: /home/vagrant/spdk_repo/dpdk 00:05:53.468 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:05:53.468 Build type: native build 00:05:53.468 Program cat found: YES (/usr/bin/cat) 00:05:53.468 Project name: DPDK 00:05:53.468 Project version: 22.11.4 00:05:53.468 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:05:53.468 C linker for the host machine: gcc ld.bfd 2.39-16 00:05:53.468 Host machine cpu family: x86_64 00:05:53.468 Host machine cpu: x86_64 00:05:53.468 Message: ## Building in Developer Mode ## 00:05:53.468 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:53.468 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:05:53.468 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:05:53.468 Program objdump found: YES (/usr/bin/objdump) 00:05:53.468 Program python3 found: YES (/usr/bin/python3) 00:05:53.468 Program cat found: YES (/usr/bin/cat) 00:05:53.468 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:05:53.468 Checking for size of "void *" : 8 00:05:53.468 Checking for size of "void *" : 8 (cached) 00:05:53.468 Library m found: YES 00:05:53.468 Library numa found: YES 00:05:53.468 Has header "numaif.h" : YES 00:05:53.468 Library fdt found: NO 00:05:53.468 Library execinfo found: NO 00:05:53.468 Has header "execinfo.h" : YES 00:05:53.468 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:05:53.468 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:53.468 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:53.468 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:53.468 Run-time dependency openssl found: YES 3.0.9 00:05:53.468 Run-time dependency libpcap found: YES 1.10.4 00:05:53.468 Has header "pcap.h" with dependency libpcap: YES 00:05:53.468 Compiler for C supports arguments -Wcast-qual: YES 00:05:53.468 Compiler for C supports arguments -Wdeprecated: YES 00:05:53.468 Compiler for C supports arguments -Wformat: YES 00:05:53.468 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:53.468 Compiler for C supports arguments -Wformat-security: NO 00:05:53.468 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:53.468 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:53.469 Compiler for C supports arguments -Wnested-externs: YES 00:05:53.469 Compiler for C supports arguments -Wold-style-definition: YES 00:05:53.469 Compiler for C supports arguments -Wpointer-arith: YES 00:05:53.469 Compiler for C supports arguments -Wsign-compare: YES 00:05:53.469 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:53.469 Compiler for C supports arguments -Wundef: YES 00:05:53.469 Compiler for C supports arguments -Wwrite-strings: YES 00:05:53.469 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:53.469 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:53.469 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:53.469 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:53.469 Compiler for C supports arguments -mavx512f: YES 00:05:53.469 Checking if "AVX512 checking" compiles: YES 00:05:53.469 Fetching value of define "__SSE4_2__" : 1 00:05:53.469 Fetching value of define "__AES__" : 1 00:05:53.469 Fetching value of define "__AVX__" : 1 00:05:53.469 Fetching value of define "__AVX2__" : 1 00:05:53.469 Fetching value of define "__AVX512BW__" : (undefined) 00:05:53.469 Fetching value of define "__AVX512CD__" : (undefined) 00:05:53.469 Fetching value of define "__AVX512DQ__" : (undefined) 00:05:53.469 Fetching value of define "__AVX512F__" : (undefined) 00:05:53.469 Fetching value of define "__AVX512VL__" : (undefined) 00:05:53.469 Fetching value of define "__PCLMUL__" : 1 00:05:53.469 Fetching value of define "__RDRND__" : 1 00:05:53.469 Fetching value of define "__RDSEED__" : 1 00:05:53.469 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:53.469 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:53.469 Message: lib/kvargs: Defining dependency "kvargs" 00:05:53.469 Message: lib/telemetry: Defining dependency "telemetry" 00:05:53.469 Checking for function "getentropy" : YES 00:05:53.469 Message: lib/eal: Defining dependency "eal" 00:05:53.469 Message: lib/ring: Defining dependency "ring" 00:05:53.469 Message: lib/rcu: Defining dependency "rcu" 00:05:53.469 Message: lib/mempool: Defining dependency "mempool" 00:05:53.469 Message: lib/mbuf: Defining dependency "mbuf" 00:05:53.469 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:05:53.469 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:53.469 Compiler for C supports arguments -mpclmul: YES 00:05:53.469 Compiler for C supports arguments -maes: YES 00:05:53.469 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:53.469 Compiler for C supports arguments -mavx512bw: YES 00:05:53.469 Compiler for C supports arguments -mavx512dq: YES 00:05:53.469 Compiler for C supports arguments -mavx512vl: YES 00:05:53.469 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:53.469 Compiler for C supports arguments -mavx2: YES 00:05:53.469 Compiler for C supports arguments -mavx: YES 00:05:53.469 Message: lib/net: Defining dependency "net" 00:05:53.469 Message: lib/meter: Defining dependency "meter" 00:05:53.469 Message: lib/ethdev: Defining dependency "ethdev" 00:05:53.469 Message: lib/pci: Defining dependency "pci" 00:05:53.469 Message: lib/cmdline: Defining dependency "cmdline" 00:05:53.469 Message: lib/metrics: Defining dependency "metrics" 00:05:53.469 Message: lib/hash: Defining dependency "hash" 00:05:53.469 Message: lib/timer: Defining dependency "timer" 00:05:53.469 Fetching value of define "__AVX2__" : 1 (cached) 00:05:53.469 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:53.469 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:05:53.469 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:05:53.469 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:05:53.469 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:05:53.469 Message: lib/acl: Defining dependency "acl" 00:05:53.469 Message: lib/bbdev: Defining dependency "bbdev" 00:05:53.469 Message: lib/bitratestats: Defining dependency "bitratestats" 00:05:53.469 Run-time dependency libelf found: YES 0.190 00:05:53.469 Message: lib/bpf: Defining dependency "bpf" 00:05:53.469 Message: lib/cfgfile: Defining dependency "cfgfile" 00:05:53.469 Message: lib/compressdev: Defining dependency "compressdev" 00:05:53.469 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:53.469 Message: lib/distributor: Defining dependency "distributor" 00:05:53.469 Message: lib/efd: Defining dependency "efd" 00:05:53.469 Message: lib/eventdev: Defining dependency "eventdev" 00:05:53.469 Message: lib/gpudev: Defining dependency "gpudev" 00:05:53.469 Message: lib/gro: Defining dependency "gro" 00:05:53.469 Message: lib/gso: Defining dependency "gso" 00:05:53.469 Message: lib/ip_frag: Defining dependency "ip_frag" 00:05:53.469 Message: lib/jobstats: Defining dependency "jobstats" 00:05:53.469 Message: lib/latencystats: Defining dependency "latencystats" 00:05:53.469 Message: lib/lpm: Defining dependency "lpm" 00:05:53.469 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:53.469 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:05:53.469 Fetching value of define "__AVX512IFMA__" : (undefined) 00:05:53.469 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:05:53.469 Message: lib/member: Defining dependency "member" 00:05:53.469 Message: lib/pcapng: Defining dependency "pcapng" 00:05:53.469 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:53.469 Message: lib/power: Defining dependency "power" 00:05:53.469 Message: lib/rawdev: Defining dependency "rawdev" 00:05:53.469 Message: lib/regexdev: Defining dependency "regexdev" 00:05:53.469 Message: lib/dmadev: Defining dependency "dmadev" 00:05:53.469 Message: lib/rib: Defining 
dependency "rib" 00:05:53.469 Message: lib/reorder: Defining dependency "reorder" 00:05:53.469 Message: lib/sched: Defining dependency "sched" 00:05:53.469 Message: lib/security: Defining dependency "security" 00:05:53.469 Message: lib/stack: Defining dependency "stack" 00:05:53.469 Has header "linux/userfaultfd.h" : YES 00:05:53.469 Message: lib/vhost: Defining dependency "vhost" 00:05:53.469 Message: lib/ipsec: Defining dependency "ipsec" 00:05:53.469 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:53.469 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:05:53.469 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:05:53.469 Compiler for C supports arguments -mavx512bw: YES (cached) 00:05:53.469 Message: lib/fib: Defining dependency "fib" 00:05:53.469 Message: lib/port: Defining dependency "port" 00:05:53.469 Message: lib/pdump: Defining dependency "pdump" 00:05:53.469 Message: lib/table: Defining dependency "table" 00:05:53.469 Message: lib/pipeline: Defining dependency "pipeline" 00:05:53.469 Message: lib/graph: Defining dependency "graph" 00:05:53.469 Message: lib/node: Defining dependency "node" 00:05:53.469 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:53.469 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:53.469 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:53.469 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:53.469 Compiler for C supports arguments -Wno-sign-compare: YES 00:05:53.469 Compiler for C supports arguments -Wno-unused-value: YES 00:05:53.469 Compiler for C supports arguments -Wno-format: YES 00:05:53.469 Compiler for C supports arguments -Wno-format-security: YES 00:05:53.469 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:05:54.407 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:05:54.407 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:05:54.407 Compiler for C supports arguments -Wno-unused-parameter: YES 00:05:54.407 Fetching value of define "__AVX2__" : 1 (cached) 00:05:54.407 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:54.407 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:54.407 Compiler for C supports arguments -mavx512bw: YES (cached) 00:05:54.407 Compiler for C supports arguments -march=skylake-avx512: YES 00:05:54.407 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:05:54.407 Program doxygen found: YES (/usr/bin/doxygen) 00:05:54.407 Configuring doxy-api.conf using configuration 00:05:54.407 Program sphinx-build found: NO 00:05:54.407 Configuring rte_build_config.h using configuration 00:05:54.407 Message: 00:05:54.407 ================= 00:05:54.407 Applications Enabled 00:05:54.407 ================= 00:05:54.407 00:05:54.407 apps: 00:05:54.408 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:05:54.408 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:05:54.408 test-security-perf, 00:05:54.408 00:05:54.408 Message: 00:05:54.408 ================= 00:05:54.408 Libraries Enabled 00:05:54.408 ================= 00:05:54.408 00:05:54.408 libs: 00:05:54.408 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:05:54.408 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:05:54.408 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:05:54.408 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:05:54.408 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:05:54.408 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:05:54.408 table, pipeline, graph, node, 00:05:54.408 00:05:54.408 Message: 00:05:54.408 =============== 00:05:54.408 Drivers Enabled 00:05:54.408 =============== 00:05:54.408 00:05:54.408 common: 00:05:54.408 00:05:54.408 bus: 00:05:54.408 pci, vdev, 00:05:54.408 mempool: 00:05:54.408 ring, 00:05:54.408 dma: 00:05:54.408 00:05:54.408 net: 00:05:54.408 i40e, 00:05:54.408 raw: 00:05:54.408 00:05:54.408 crypto: 00:05:54.408 00:05:54.408 compress: 00:05:54.408 00:05:54.408 regex: 00:05:54.408 00:05:54.408 vdpa: 00:05:54.408 00:05:54.408 event: 00:05:54.408 00:05:54.408 baseband: 00:05:54.408 00:05:54.408 gpu: 00:05:54.408 00:05:54.408 00:05:54.408 Message: 00:05:54.408 ================= 00:05:54.408 Content Skipped 00:05:54.408 ================= 00:05:54.408 00:05:54.408 apps: 00:05:54.408 00:05:54.408 libs: 00:05:54.408 kni: explicitly disabled via build config (deprecated lib) 00:05:54.408 flow_classify: explicitly disabled via build config (deprecated lib) 00:05:54.408 00:05:54.408 drivers: 00:05:54.408 common/cpt: not in enabled drivers build config 00:05:54.408 common/dpaax: not in enabled drivers build config 00:05:54.408 common/iavf: not in enabled drivers build config 00:05:54.408 common/idpf: not in enabled drivers build config 00:05:54.408 common/mvep: not in enabled drivers build config 00:05:54.408 common/octeontx: not in enabled drivers build config 00:05:54.408 bus/auxiliary: not in enabled drivers build config 00:05:54.408 bus/dpaa: not in enabled drivers build config 00:05:54.408 bus/fslmc: not in enabled drivers build config 00:05:54.408 bus/ifpga: not in enabled drivers build config 00:05:54.408 bus/vmbus: not in enabled drivers build config 00:05:54.408 common/cnxk: not in enabled drivers build config 00:05:54.408 common/mlx5: not in enabled drivers build config 00:05:54.408 common/qat: not in enabled drivers build config 00:05:54.408 common/sfc_efx: not in enabled drivers build config 00:05:54.408 mempool/bucket: not in enabled drivers build config 00:05:54.408 mempool/cnxk: not in enabled drivers build config 00:05:54.408 mempool/dpaa: not in enabled drivers build config 00:05:54.408 mempool/dpaa2: not in enabled drivers build config 00:05:54.408 mempool/octeontx: not in enabled drivers build config 00:05:54.408 mempool/stack: not in enabled drivers build config 00:05:54.408 dma/cnxk: not in enabled drivers build config 00:05:54.408 dma/dpaa: not in enabled drivers build config 00:05:54.408 dma/dpaa2: not in enabled drivers build config 00:05:54.408 dma/hisilicon: not in enabled drivers build config 00:05:54.408 dma/idxd: not in enabled drivers build config 00:05:54.408 dma/ioat: not in enabled drivers build config 00:05:54.408 dma/skeleton: not in enabled drivers build config 00:05:54.408 net/af_packet: not in enabled drivers build config 00:05:54.408 net/af_xdp: not in enabled drivers build config 00:05:54.408 net/ark: not in enabled drivers build config 00:05:54.408 net/atlantic: not in enabled drivers build config 00:05:54.408 net/avp: not in enabled drivers build config 00:05:54.408 net/axgbe: not in enabled drivers build config 00:05:54.408 net/bnx2x: not in enabled drivers build config 00:05:54.408 net/bnxt: not in enabled drivers build config 00:05:54.408 net/bonding: not in enabled drivers build config 00:05:54.408 net/cnxk: not in enabled drivers build config 00:05:54.408 net/cxgbe: not in 
enabled drivers build config 00:05:54.408 net/dpaa: not in enabled drivers build config 00:05:54.408 net/dpaa2: not in enabled drivers build config 00:05:54.408 net/e1000: not in enabled drivers build config 00:05:54.408 net/ena: not in enabled drivers build config 00:05:54.408 net/enetc: not in enabled drivers build config 00:05:54.408 net/enetfec: not in enabled drivers build config 00:05:54.408 net/enic: not in enabled drivers build config 00:05:54.408 net/failsafe: not in enabled drivers build config 00:05:54.408 net/fm10k: not in enabled drivers build config 00:05:54.408 net/gve: not in enabled drivers build config 00:05:54.408 net/hinic: not in enabled drivers build config 00:05:54.408 net/hns3: not in enabled drivers build config 00:05:54.408 net/iavf: not in enabled drivers build config 00:05:54.408 net/ice: not in enabled drivers build config 00:05:54.408 net/idpf: not in enabled drivers build config 00:05:54.408 net/igc: not in enabled drivers build config 00:05:54.408 net/ionic: not in enabled drivers build config 00:05:54.408 net/ipn3ke: not in enabled drivers build config 00:05:54.408 net/ixgbe: not in enabled drivers build config 00:05:54.408 net/kni: not in enabled drivers build config 00:05:54.408 net/liquidio: not in enabled drivers build config 00:05:54.408 net/mana: not in enabled drivers build config 00:05:54.408 net/memif: not in enabled drivers build config 00:05:54.408 net/mlx4: not in enabled drivers build config 00:05:54.408 net/mlx5: not in enabled drivers build config 00:05:54.408 net/mvneta: not in enabled drivers build config 00:05:54.408 net/mvpp2: not in enabled drivers build config 00:05:54.408 net/netvsc: not in enabled drivers build config 00:05:54.408 net/nfb: not in enabled drivers build config 00:05:54.408 net/nfp: not in enabled drivers build config 00:05:54.408 net/ngbe: not in enabled drivers build config 00:05:54.408 net/null: not in enabled drivers build config 00:05:54.408 net/octeontx: not in enabled drivers build config 00:05:54.408 net/octeon_ep: not in enabled drivers build config 00:05:54.408 net/pcap: not in enabled drivers build config 00:05:54.408 net/pfe: not in enabled drivers build config 00:05:54.408 net/qede: not in enabled drivers build config 00:05:54.408 net/ring: not in enabled drivers build config 00:05:54.408 net/sfc: not in enabled drivers build config 00:05:54.408 net/softnic: not in enabled drivers build config 00:05:54.408 net/tap: not in enabled drivers build config 00:05:54.408 net/thunderx: not in enabled drivers build config 00:05:54.408 net/txgbe: not in enabled drivers build config 00:05:54.408 net/vdev_netvsc: not in enabled drivers build config 00:05:54.408 net/vhost: not in enabled drivers build config 00:05:54.408 net/virtio: not in enabled drivers build config 00:05:54.408 net/vmxnet3: not in enabled drivers build config 00:05:54.408 raw/cnxk_bphy: not in enabled drivers build config 00:05:54.408 raw/cnxk_gpio: not in enabled drivers build config 00:05:54.408 raw/dpaa2_cmdif: not in enabled drivers build config 00:05:54.408 raw/ifpga: not in enabled drivers build config 00:05:54.408 raw/ntb: not in enabled drivers build config 00:05:54.408 raw/skeleton: not in enabled drivers build config 00:05:54.408 crypto/armv8: not in enabled drivers build config 00:05:54.408 crypto/bcmfs: not in enabled drivers build config 00:05:54.408 crypto/caam_jr: not in enabled drivers build config 00:05:54.408 crypto/ccp: not in enabled drivers build config 00:05:54.408 crypto/cnxk: not in enabled drivers build config 00:05:54.408 
crypto/dpaa_sec: not in enabled drivers build config 00:05:54.408 crypto/dpaa2_sec: not in enabled drivers build config 00:05:54.408 crypto/ipsec_mb: not in enabled drivers build config 00:05:54.408 crypto/mlx5: not in enabled drivers build config 00:05:54.408 crypto/mvsam: not in enabled drivers build config 00:05:54.408 crypto/nitrox: not in enabled drivers build config 00:05:54.408 crypto/null: not in enabled drivers build config 00:05:54.408 crypto/octeontx: not in enabled drivers build config 00:05:54.408 crypto/openssl: not in enabled drivers build config 00:05:54.408 crypto/scheduler: not in enabled drivers build config 00:05:54.408 crypto/uadk: not in enabled drivers build config 00:05:54.408 crypto/virtio: not in enabled drivers build config 00:05:54.408 compress/isal: not in enabled drivers build config 00:05:54.408 compress/mlx5: not in enabled drivers build config 00:05:54.408 compress/octeontx: not in enabled drivers build config 00:05:54.408 compress/zlib: not in enabled drivers build config 00:05:54.408 regex/mlx5: not in enabled drivers build config 00:05:54.408 regex/cn9k: not in enabled drivers build config 00:05:54.408 vdpa/ifc: not in enabled drivers build config 00:05:54.408 vdpa/mlx5: not in enabled drivers build config 00:05:54.408 vdpa/sfc: not in enabled drivers build config 00:05:54.408 event/cnxk: not in enabled drivers build config 00:05:54.408 event/dlb2: not in enabled drivers build config 00:05:54.408 event/dpaa: not in enabled drivers build config 00:05:54.408 event/dpaa2: not in enabled drivers build config 00:05:54.408 event/dsw: not in enabled drivers build config 00:05:54.408 event/opdl: not in enabled drivers build config 00:05:54.408 event/skeleton: not in enabled drivers build config 00:05:54.408 event/sw: not in enabled drivers build config 00:05:54.408 event/octeontx: not in enabled drivers build config 00:05:54.408 baseband/acc: not in enabled drivers build config 00:05:54.408 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:05:54.408 baseband/fpga_lte_fec: not in enabled drivers build config 00:05:54.408 baseband/la12xx: not in enabled drivers build config 00:05:54.408 baseband/null: not in enabled drivers build config 00:05:54.408 baseband/turbo_sw: not in enabled drivers build config 00:05:54.408 gpu/cuda: not in enabled drivers build config 00:05:54.408 00:05:54.408 00:05:54.408 Build targets in project: 314 00:05:54.408 00:05:54.408 DPDK 22.11.4 00:05:54.408 00:05:54.408 User defined options 00:05:54.408 libdir : lib 00:05:54.408 prefix : /home/vagrant/spdk_repo/dpdk/build 00:05:54.408 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:05:54.408 c_link_args : 00:05:54.408 enable_docs : false 00:05:54.408 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:05:54.408 enable_kmods : false 00:05:54.408 machine : native 00:05:54.409 tests : false 00:05:54.409 00:05:54.409 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:54.409 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
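The "User defined options" block above, together with the deprecation warning about running `meson [options]` instead of `meson setup [options]`, corresponds to a configure step roughly like the sketch below. This is a reconstruction from the logged options only, not the literal command the autobuild script ran; the build directory name build-tmp is taken from the ninja invocation that follows, and the explicit `meson setup` form is used to avoid the ambiguity the warning flags.

  # Reconstructed from the options printed above; not the exact invocation used by autobuild_common.sh
  $ cd /home/vagrant/spdk_repo/dpdk
  $ meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Dmachine=native \
      -Dtests=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base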
00:05:54.670 12:17:23 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:05:54.670 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:05:54.670 [1/743] Generating lib/rte_kvargs_mingw with a custom command 00:05:54.670 [2/743] Generating lib/rte_kvargs_def with a custom command 00:05:54.670 [3/743] Generating lib/rte_telemetry_def with a custom command 00:05:54.670 [4/743] Generating lib/rte_telemetry_mingw with a custom command 00:05:54.670 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:54.670 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:54.670 [7/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:54.670 [8/743] Linking static target lib/librte_kvargs.a 00:05:54.935 [9/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:54.935 [10/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:54.935 [11/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:54.935 [12/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:54.935 [13/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:54.935 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:54.935 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:54.935 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:54.935 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:54.935 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:54.935 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:54.935 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:55.193 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:05:55.193 [22/743] Linking target lib/librte_kvargs.so.23.0 00:05:55.193 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:55.193 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:55.193 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:55.193 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:55.193 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:55.193 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:55.193 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:55.193 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:55.193 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:55.451 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:55.451 [33/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:55.451 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:55.451 [35/743] Linking static target lib/librte_telemetry.a 00:05:55.451 [36/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:05:55.451 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:55.451 [38/743] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:55.451 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:55.451 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:55.451 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:55.725 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:55.725 [43/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:55.725 [44/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:55.725 [45/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:55.725 [46/743] Linking target lib/librte_telemetry.so.23.0 00:05:55.725 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:55.725 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:55.725 [49/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:05:55.725 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:55.725 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:55.725 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:55.983 [53/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:55.983 [54/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:55.983 [55/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:55.983 [56/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:55.983 [57/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:55.983 [58/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:55.983 [59/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:55.983 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:55.983 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:55.983 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:55.983 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:55.983 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:55.983 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:05:55.983 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:56.241 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:56.241 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:56.241 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:56.241 [70/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:56.241 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:56.241 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:56.241 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:56.241 [74/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:56.241 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:56.241 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:56.241 [77/743] Generating lib/rte_eal_def with a custom command 00:05:56.241 [78/743] Generating lib/rte_eal_mingw with a custom 
command 00:05:56.241 [79/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:56.241 [80/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:56.241 [81/743] Generating lib/rte_ring_def with a custom command 00:05:56.241 [82/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:56.241 [83/743] Generating lib/rte_ring_mingw with a custom command 00:05:56.241 [84/743] Generating lib/rte_rcu_def with a custom command 00:05:56.241 [85/743] Generating lib/rte_rcu_mingw with a custom command 00:05:56.241 [86/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:56.499 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:56.499 [88/743] Linking static target lib/librte_ring.a 00:05:56.499 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:56.499 [90/743] Generating lib/rte_mempool_def with a custom command 00:05:56.499 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:05:56.499 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:56.499 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:56.756 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:56.756 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:56.756 [96/743] Linking static target lib/librte_eal.a 00:05:57.014 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:57.014 [98/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:57.014 [99/743] Generating lib/rte_mbuf_def with a custom command 00:05:57.014 [100/743] Generating lib/rte_mbuf_mingw with a custom command 00:05:57.014 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:57.272 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:57.272 [103/743] Linking static target lib/librte_rcu.a 00:05:57.272 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:57.272 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:57.529 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:57.529 [107/743] Linking static target lib/librte_mempool.a 00:05:57.529 [108/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:57.529 [109/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:57.529 [110/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:57.529 [111/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:57.529 [112/743] Generating lib/rte_net_def with a custom command 00:05:57.529 [113/743] Generating lib/rte_net_mingw with a custom command 00:05:57.787 [114/743] Generating lib/rte_meter_def with a custom command 00:05:57.787 [115/743] Generating lib/rte_meter_mingw with a custom command 00:05:57.787 [116/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:57.787 [117/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:57.787 [118/743] Linking static target lib/librte_meter.a 00:05:57.787 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:57.787 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:58.045 [121/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:58.045 [122/743] Generating lib/meter.sym_chk with a custom 
command (wrapped by meson to capture output) 00:05:58.045 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:58.045 [124/743] Linking static target lib/librte_mbuf.a 00:05:58.045 [125/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:58.045 [126/743] Linking static target lib/librte_net.a 00:05:58.302 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:58.302 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:58.560 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:58.560 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:58.560 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:58.560 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:58.560 [133/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:58.560 [134/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:58.818 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:59.080 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:59.348 [137/743] Generating lib/rte_ethdev_def with a custom command 00:05:59.348 [138/743] Generating lib/rte_ethdev_mingw with a custom command 00:05:59.348 [139/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:59.348 [140/743] Generating lib/rte_pci_def with a custom command 00:05:59.348 [141/743] Generating lib/rte_pci_mingw with a custom command 00:05:59.348 [142/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:59.348 [143/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:59.348 [144/743] Linking static target lib/librte_pci.a 00:05:59.348 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:59.348 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:59.348 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:59.348 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:59.605 [149/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:59.605 [150/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:59.605 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:59.605 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:59.605 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:59.605 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:59.605 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:59.605 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:59.605 [157/743] Generating lib/rte_cmdline_def with a custom command 00:05:59.605 [158/743] Generating lib/rte_cmdline_mingw with a custom command 00:05:59.605 [159/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:59.863 [160/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:59.863 [161/743] Generating lib/rte_metrics_def with a custom command 00:05:59.863 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:05:59.863 [163/743] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:59.863 [164/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:05:59.863 [165/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:59.863 [166/743] Generating lib/rte_hash_def with a custom command 00:05:59.863 [167/743] Generating lib/rte_hash_mingw with a custom command 00:05:59.863 [168/743] Generating lib/rte_timer_def with a custom command 00:06:00.121 [169/743] Generating lib/rte_timer_mingw with a custom command 00:06:00.121 [170/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:00.121 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:00.121 [172/743] Linking static target lib/librte_cmdline.a 00:06:00.121 [173/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:00.378 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:06:00.378 [175/743] Linking static target lib/librte_metrics.a 00:06:00.378 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:00.378 [177/743] Linking static target lib/librte_timer.a 00:06:00.636 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:06:00.893 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:00.894 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:00.894 [181/743] Linking static target lib/librte_ethdev.a 00:06:00.894 [182/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:00.894 [183/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:06:00.894 [184/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:01.458 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:06:01.458 [186/743] Generating lib/rte_acl_def with a custom command 00:06:01.458 [187/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:06:01.458 [188/743] Generating lib/rte_acl_mingw with a custom command 00:06:01.458 [189/743] Generating lib/rte_bbdev_def with a custom command 00:06:01.458 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:06:01.716 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:06:01.716 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:06:01.716 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:06:01.974 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:06:02.236 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:06:02.236 [196/743] Linking static target lib/librte_bitratestats.a 00:06:02.236 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:06:02.492 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:06:02.492 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:06:02.492 [200/743] Linking static target lib/librte_bbdev.a 00:06:02.750 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:06:02.750 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:02.750 [203/743] Linking static target lib/librte_hash.a 00:06:03.008 [204/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:06:03.008 [205/743] Linking static target lib/acl/libavx512_tmp.a 00:06:03.008 [206/743] Generating lib/bbdev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:06:03.266 [207/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:06:03.266 [208/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:06:03.266 [209/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:06:03.523 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.523 [211/743] Generating lib/rte_bpf_def with a custom command 00:06:03.523 [212/743] Generating lib/rte_bpf_mingw with a custom command 00:06:03.781 [213/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:06:03.781 [214/743] Generating lib/rte_cfgfile_def with a custom command 00:06:03.781 [215/743] Generating lib/rte_cfgfile_mingw with a custom command 00:06:03.781 [216/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:06:03.781 [217/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:06:03.781 [218/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:06:03.781 [219/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:06:03.781 [220/743] Linking static target lib/librte_acl.a 00:06:03.781 [221/743] Linking static target lib/librte_cfgfile.a 00:06:04.074 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:06:04.074 [223/743] Generating lib/rte_compressdev_def with a custom command 00:06:04.074 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:06:04.074 [225/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:06:04.074 [226/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:06:04.074 [227/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:04.331 [228/743] Linking target lib/librte_eal.so.23.0 00:06:04.331 [229/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:06:04.331 [230/743] Generating lib/rte_cryptodev_def with a custom command 00:06:04.331 [231/743] Generating lib/rte_cryptodev_mingw with a custom command 00:06:04.331 [232/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:04.331 [233/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:06:04.331 [234/743] Linking target lib/librte_ring.so.23.0 00:06:04.588 [235/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:06:04.588 [236/743] Linking target lib/librte_meter.so.23.0 00:06:04.588 [237/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:04.588 [238/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:06:04.588 [239/743] Linking target lib/librte_pci.so.23.0 00:06:04.588 [240/743] Linking target lib/librte_rcu.so.23.0 00:06:04.588 [241/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:04.588 [242/743] Linking target lib/librte_mempool.so.23.0 00:06:04.588 [243/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:06:04.588 [244/743] Linking target lib/librte_timer.so.23.0 00:06:04.588 [245/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:04.846 [246/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:06:04.846 [247/743] Linking static target lib/librte_bpf.a 00:06:04.846 [248/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:06:04.846 [249/743] Generating 
symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:06:04.846 [250/743] Linking target lib/librte_acl.so.23.0 00:06:04.846 [251/743] Linking target lib/librte_cfgfile.so.23.0 00:06:04.846 [252/743] Linking static target lib/librte_compressdev.a 00:06:04.846 [253/743] Linking target lib/librte_mbuf.so.23.0 00:06:04.846 [254/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:06:04.846 [255/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:06:04.846 [256/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:06:04.846 [257/743] Generating lib/rte_distributor_def with a custom command 00:06:04.846 [258/743] Linking target lib/librte_net.so.23.0 00:06:04.846 [259/743] Linking target lib/librte_bbdev.so.23.0 00:06:05.103 [260/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:05.103 [261/743] Generating lib/rte_distributor_mingw with a custom command 00:06:05.103 [262/743] Generating lib/rte_efd_def with a custom command 00:06:05.103 [263/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:05.103 [264/743] Generating lib/rte_efd_mingw with a custom command 00:06:05.103 [265/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:06:05.103 [266/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:06:05.103 [267/743] Linking target lib/librte_cmdline.so.23.0 00:06:05.103 [268/743] Linking target lib/librte_hash.so.23.0 00:06:05.358 [269/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:06:05.358 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:06:05.358 [271/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:06:05.358 [272/743] Linking static target lib/librte_distributor.a 00:06:05.616 [273/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:05.616 [274/743] Linking target lib/librte_ethdev.so.23.0 00:06:05.616 [275/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:05.616 [276/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:06:05.616 [277/743] Linking target lib/librte_compressdev.so.23.0 00:06:05.616 [278/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:06:05.616 [279/743] Linking target lib/librte_distributor.so.23.0 00:06:05.873 [280/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:06:05.873 [281/743] Linking target lib/librte_metrics.so.23.0 00:06:05.873 [282/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:06:05.873 [283/743] Linking target lib/librte_bpf.so.23.0 00:06:05.873 [284/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:06:05.873 [285/743] Linking target lib/librte_bitratestats.so.23.0 00:06:05.873 [286/743] Generating lib/rte_eventdev_def with a custom command 00:06:05.873 [287/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:06:06.131 [288/743] Generating lib/rte_eventdev_mingw with a custom command 00:06:06.131 [289/743] Generating lib/rte_gpudev_def with a custom command 00:06:06.131 [290/743] Generating lib/rte_gpudev_mingw with a custom command 
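Each "Linking target lib/librte_*.so.23.0" line above produces a versioned shared object under the build directory, with the ABI major version (23 for DPDK 22.11, matching the .so.23.0 file names in the log) carried in its SONAME. A quick spot check on one of the libraries linked in this stretch, assuming binutils is available on the build host and using the path exactly as ninja prints it:

  $ cd /home/vagrant/spdk_repo/dpdk/build-tmp
  $ objdump -p lib/librte_hash.so.23.0 | grep SONAME
  # expected output along the lines of:  SONAME  librte_hash.so.23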
00:06:06.389 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:06:06.647 [292/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:06:06.647 [293/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:06.647 [294/743] Linking static target lib/librte_cryptodev.a 00:06:06.647 [295/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:06:06.647 [296/743] Linking static target lib/librte_efd.a 00:06:06.647 [297/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:06:06.904 [298/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:06:06.904 [299/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:06:06.905 [300/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:06:06.905 [301/743] Linking target lib/librte_efd.so.23.0 00:06:06.905 [302/743] Linking static target lib/librte_gpudev.a 00:06:06.905 [303/743] Generating lib/rte_gro_def with a custom command 00:06:06.905 [304/743] Generating lib/rte_gro_mingw with a custom command 00:06:06.905 [305/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:06:07.163 [306/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:06:07.422 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:06:07.422 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:06:07.680 [309/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:06:07.680 [310/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:06:07.680 [311/743] Generating lib/rte_gso_def with a custom command 00:06:07.680 [312/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:07.680 [313/743] Generating lib/rte_gso_mingw with a custom command 00:06:07.680 [314/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:06:07.680 [315/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:06:07.680 [316/743] Linking target lib/librte_gpudev.so.23.0 00:06:07.680 [317/743] Linking static target lib/librte_gro.a 00:06:07.938 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:06:07.938 [319/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:06:07.938 [320/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:06:07.938 [321/743] Linking target lib/librte_gro.so.23.0 00:06:07.938 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:06:07.938 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:06:08.195 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:06:08.195 [325/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:06:08.195 [326/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:06:08.195 [327/743] Linking static target lib/librte_jobstats.a 00:06:08.196 [328/743] Linking static target lib/librte_eventdev.a 00:06:08.196 [329/743] Generating lib/rte_jobstats_def with a custom command 00:06:08.196 [330/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:06:08.196 [331/743] Linking static target lib/librte_gso.a 00:06:08.196 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:06:08.453 [333/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:06:08.453 [334/743] Generating lib/gso.sym_chk with 
a custom command (wrapped by meson to capture output) 00:06:08.453 [335/743] Linking target lib/librte_gso.so.23.0 00:06:08.453 [336/743] Generating lib/rte_latencystats_def with a custom command 00:06:08.453 [337/743] Generating lib/rte_latencystats_mingw with a custom command 00:06:08.453 [338/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:06:08.714 [339/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:06:08.714 [340/743] Linking target lib/librte_jobstats.so.23.0 00:06:08.714 [341/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:06:08.714 [342/743] Generating lib/rte_lpm_def with a custom command 00:06:08.714 [343/743] Generating lib/rte_lpm_mingw with a custom command 00:06:08.714 [344/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:08.714 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:06:08.714 [346/743] Linking target lib/librte_cryptodev.so.23.0 00:06:08.714 [347/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:06:08.714 [348/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:06:08.971 [349/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:06:08.971 [350/743] Linking static target lib/librte_ip_frag.a 00:06:09.229 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:06:09.229 [352/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:06:09.229 [353/743] Linking static target lib/librte_latencystats.a 00:06:09.229 [354/743] Linking target lib/librte_ip_frag.so.23.0 00:06:09.229 [355/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:06:09.229 [356/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:06:09.229 [357/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:06:09.229 [358/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:06:09.229 [359/743] Generating lib/rte_member_def with a custom command 00:06:09.487 [360/743] Generating lib/rte_member_mingw with a custom command 00:06:09.487 [361/743] Generating lib/rte_pcapng_def with a custom command 00:06:09.487 [362/743] Generating lib/rte_pcapng_mingw with a custom command 00:06:09.487 [363/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:06:09.487 [364/743] Linking target lib/librte_latencystats.so.23.0 00:06:09.487 [365/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:06:09.487 [366/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:09.487 [367/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:06:09.487 [368/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:09.487 [369/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:09.745 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:09.745 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:06:10.002 [372/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:06:10.002 [373/743] Generating lib/rte_power_def with a custom command 00:06:10.002 [374/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:06:10.002 [375/743] Linking static 
target lib/librte_lpm.a 00:06:10.002 [376/743] Generating lib/rte_power_mingw with a custom command 00:06:10.002 [377/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:10.259 [378/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:10.259 [379/743] Generating lib/rte_rawdev_def with a custom command 00:06:10.259 [380/743] Linking target lib/librte_eventdev.so.23.0 00:06:10.260 [381/743] Generating lib/rte_rawdev_mingw with a custom command 00:06:10.260 [382/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:10.260 [383/743] Generating lib/rte_regexdev_def with a custom command 00:06:10.260 [384/743] Generating lib/rte_regexdev_mingw with a custom command 00:06:10.260 [385/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:06:10.260 [386/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:06:10.260 [387/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:06:10.260 [388/743] Generating lib/rte_dmadev_def with a custom command 00:06:10.260 [389/743] Linking static target lib/librte_pcapng.a 00:06:10.260 [390/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:06:10.260 [391/743] Generating lib/rte_dmadev_mingw with a custom command 00:06:10.260 [392/743] Linking target lib/librte_lpm.so.23.0 00:06:10.517 [393/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:06:10.517 [394/743] Linking static target lib/librte_rawdev.a 00:06:10.517 [395/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:10.517 [396/743] Generating lib/rte_rib_def with a custom command 00:06:10.517 [397/743] Generating lib/rte_rib_mingw with a custom command 00:06:10.517 [398/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:06:10.517 [399/743] Generating lib/rte_reorder_def with a custom command 00:06:10.517 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:06:10.517 [401/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:10.517 [402/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:06:10.517 [403/743] Linking static target lib/librte_power.a 00:06:10.776 [404/743] Linking target lib/librte_pcapng.so.23.0 00:06:10.776 [405/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:10.776 [406/743] Linking static target lib/librte_dmadev.a 00:06:10.776 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:06:10.776 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:10.776 [409/743] Linking target lib/librte_rawdev.so.23.0 00:06:11.034 [410/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:06:11.035 [411/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:06:11.035 [412/743] Linking static target lib/librte_regexdev.a 00:06:11.035 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:06:11.035 [414/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:06:11.035 [415/743] Generating lib/rte_sched_def with a custom command 00:06:11.035 [416/743] Generating lib/rte_sched_mingw with a custom command 00:06:11.035 [417/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:06:11.035 [418/743] Generating 
lib/rte_security_def with a custom command 00:06:11.035 [419/743] Linking static target lib/librte_member.a 00:06:11.035 [420/743] Generating lib/rte_security_mingw with a custom command 00:06:11.035 [421/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:06:11.292 [422/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:11.292 [423/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:06:11.292 [424/743] Linking target lib/librte_dmadev.so.23.0 00:06:11.292 [425/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:06:11.292 [426/743] Generating lib/rte_stack_def with a custom command 00:06:11.292 [427/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:11.292 [428/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:06:11.292 [429/743] Linking static target lib/librte_reorder.a 00:06:11.292 [430/743] Generating lib/rte_stack_mingw with a custom command 00:06:11.292 [431/743] Linking static target lib/librte_stack.a 00:06:11.292 [432/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:06:11.550 [433/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:06:11.550 [434/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:11.550 [435/743] Linking target lib/librte_member.so.23.0 00:06:11.550 [436/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:06:11.550 [437/743] Linking static target lib/librte_rib.a 00:06:11.550 [438/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:06:11.550 [439/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:11.550 [440/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:11.550 [441/743] Linking target lib/librte_stack.so.23.0 00:06:11.550 [442/743] Linking target lib/librte_reorder.so.23.0 00:06:11.550 [443/743] Linking target lib/librte_power.so.23.0 00:06:11.808 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:11.808 [445/743] Linking target lib/librte_regexdev.so.23.0 00:06:12.065 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:12.065 [447/743] Linking static target lib/librte_security.a 00:06:12.065 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:06:12.065 [449/743] Linking target lib/librte_rib.so.23.0 00:06:12.065 [450/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:06:12.065 [451/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:12.323 [452/743] Generating lib/rte_vhost_def with a custom command 00:06:12.323 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:06:12.323 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:12.323 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:12.323 [456/743] Linking target lib/librte_security.so.23.0 00:06:12.621 [457/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:12.621 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:06:12.621 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:06:12.621 [460/743] Linking static target lib/librte_sched.a 00:06:13.186 [461/743] Compiling C 
object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:06:13.186 [462/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:13.186 [463/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:06:13.186 [464/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:06:13.186 [465/743] Generating lib/rte_ipsec_def with a custom command 00:06:13.186 [466/743] Generating lib/rte_ipsec_mingw with a custom command 00:06:13.186 [467/743] Linking target lib/librte_sched.so.23.0 00:06:13.186 [468/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:13.186 [469/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:06:13.445 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:06:13.445 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:06:13.703 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:06:13.703 [473/743] Generating lib/rte_fib_def with a custom command 00:06:13.703 [474/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:06:13.703 [475/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:06:13.703 [476/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:06:13.703 [477/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:06:13.703 [478/743] Generating lib/rte_fib_mingw with a custom command 00:06:13.961 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:06:13.961 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:06:14.218 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:06:14.218 [482/743] Linking static target lib/librte_ipsec.a 00:06:14.475 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:06:14.475 [484/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:06:14.475 [485/743] Linking target lib/librte_ipsec.so.23.0 00:06:14.733 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:06:14.733 [487/743] Linking static target lib/librte_fib.a 00:06:14.733 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:06:14.733 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:06:14.733 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:06:14.733 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:06:14.991 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:06:14.991 [493/743] Linking target lib/librte_fib.so.23.0 00:06:14.991 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:06:15.581 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:06:15.581 [496/743] Generating lib/rte_port_def with a custom command 00:06:15.581 [497/743] Generating lib/rte_port_mingw with a custom command 00:06:15.581 [498/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:06:15.839 [499/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:06:15.839 [500/743] Generating lib/rte_pdump_def with a custom command 00:06:15.839 [501/743] Generating lib/rte_pdump_mingw with a custom command 00:06:15.839 [502/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:06:15.839 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:06:15.839 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:06:16.097 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:06:16.097 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:06:16.097 [507/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:06:16.097 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:06:16.354 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:06:16.354 [510/743] Linking static target lib/librte_port.a 00:06:16.611 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:06:16.867 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:06:16.867 [513/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:06:16.867 [514/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:06:16.867 [515/743] Linking target lib/librte_port.so.23.0 00:06:16.867 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:06:16.867 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:06:16.867 [518/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:06:16.867 [519/743] Linking static target lib/librte_pdump.a 00:06:16.867 [520/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:06:17.125 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:06:17.125 [522/743] Linking target lib/librte_pdump.so.23.0 00:06:17.381 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:06:17.382 [524/743] Generating lib/rte_table_def with a custom command 00:06:17.638 [525/743] Generating lib/rte_table_mingw with a custom command 00:06:17.638 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:06:17.638 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:06:17.638 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:06:17.894 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:06:17.894 [530/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:17.894 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:06:17.894 [532/743] Generating lib/rte_pipeline_def with a custom command 00:06:18.151 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:06:18.151 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:06:18.151 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:06:18.151 [536/743] Linking static target lib/librte_table.a 00:06:18.151 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:06:18.714 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:06:18.714 [539/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:06:18.715 [540/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:06:18.715 [541/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:06:18.715 [542/743] Linking target lib/librte_table.so.23.0 00:06:18.715 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:06:18.972 [544/743] Generating lib/rte_graph_def with a custom command 00:06:18.972 [545/743] Generating lib/rte_graph_mingw with a custom 
command 00:06:18.972 [546/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:06:19.229 [547/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:06:19.230 [548/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:06:19.487 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:06:19.487 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:06:19.487 [551/743] Linking static target lib/librte_graph.a 00:06:19.487 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:06:19.745 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:06:19.745 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:06:20.002 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:06:20.259 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:06:20.259 [557/743] Generating lib/rte_node_def with a custom command 00:06:20.259 [558/743] Generating lib/rte_node_mingw with a custom command 00:06:20.259 [559/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:06:20.259 [560/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:20.259 [561/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:06:20.516 [562/743] Linking target lib/librte_graph.so.23.0 00:06:20.516 [563/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:20.516 [564/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:06:20.516 [565/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:06:20.516 [566/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:06:20.516 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:06:20.516 [568/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:20.516 [569/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:06:20.774 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:20.774 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:06:20.774 [572/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:06:20.774 [573/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:20.774 [574/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:06:20.774 [575/743] Generating drivers/rte_mempool_ring_def with a custom command 00:06:20.774 [576/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:06:20.774 [577/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:06:20.774 [578/743] Linking static target lib/librte_node.a 00:06:20.774 [579/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:20.774 [580/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:21.031 [581/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:21.031 [582/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:06:21.031 [583/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:21.031 [584/743] Linking target lib/librte_node.so.23.0 00:06:21.031 [585/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:21.031 [586/743] Compiling C object 
drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:21.031 [587/743] Linking static target drivers/librte_bus_vdev.a 00:06:21.291 [588/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:21.291 [589/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:21.291 [590/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:21.291 [591/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:21.291 [592/743] Linking target drivers/librte_bus_vdev.so.23.0 00:06:21.549 [593/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:21.549 [594/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:21.549 [595/743] Linking static target drivers/librte_bus_pci.a 00:06:21.549 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:06:21.807 [597/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:06:21.807 [598/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:21.807 [599/743] Linking target drivers/librte_bus_pci.so.23.0 00:06:21.807 [600/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:06:21.807 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:06:22.065 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:06:22.065 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:22.065 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:22.323 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:22.323 [606/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:22.323 [607/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:06:22.323 [608/743] Linking static target drivers/librte_mempool_ring.a 00:06:22.323 [609/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:22.323 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:06:22.887 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:06:23.145 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:06:23.145 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:06:23.145 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:06:23.710 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:06:23.968 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:06:23.968 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:06:24.226 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:06:24.226 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:06:24.483 [620/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:06:24.483 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:06:24.483 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:06:24.483 [623/743] Generating drivers/rte_net_i40e_mingw with a custom command 
00:06:24.740 [624/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:06:24.998 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:06:25.658 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:06:26.223 [627/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:06:26.223 [628/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:06:26.223 [629/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:06:26.223 [630/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:06:26.223 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:06:26.223 [632/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:06:26.223 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:06:26.223 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:06:26.481 [635/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:06:26.481 [636/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:06:27.046 [637/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:06:27.046 [638/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:06:27.046 [639/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:06:27.304 [640/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:06:27.304 [641/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:06:27.304 [642/743] Linking static target drivers/librte_net_i40e.a 00:06:27.304 [643/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:06:27.561 [644/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:06:27.561 [645/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:06:27.561 [646/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:27.561 [647/743] Linking static target lib/librte_vhost.a 00:06:27.561 [648/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:06:27.818 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:06:27.818 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:06:28.074 [651/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:06:28.074 [652/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:06:28.331 [653/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:06:28.331 [654/743] Linking target drivers/librte_net_i40e.so.23.0 00:06:28.597 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:06:28.597 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:06:28.597 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:06:28.881 [658/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:28.881 [659/743] Linking target lib/librte_vhost.so.23.0 00:06:29.138 [660/743] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:06:29.138 [661/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:06:29.138 [662/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:06:29.138 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:06:29.396 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:06:29.396 [665/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:06:29.654 [666/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:06:29.654 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:06:29.654 [668/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:06:29.654 [669/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:06:29.911 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:06:30.169 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:06:30.169 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:06:30.734 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:06:30.734 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:06:30.992 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:06:30.992 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:06:30.992 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:06:31.556 [678/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:06:31.556 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:06:31.556 [680/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:06:31.556 [681/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:06:31.556 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:06:31.814 [683/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:06:31.814 [684/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:06:32.071 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:06:32.339 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:06:32.339 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:06:32.339 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:06:32.339 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:06:32.339 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:06:32.596 [691/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:06:32.597 [692/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:06:32.597 [693/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:06:32.597 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:06:33.163 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:06:33.163 [696/743] 
Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:06:33.163 [697/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:06:33.421 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:06:33.679 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:06:34.244 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:06:34.245 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:06:34.245 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:06:34.245 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:06:34.503 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:06:34.503 [705/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:06:34.503 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:06:34.503 [707/743] Linking static target lib/librte_pipeline.a 00:06:35.068 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:06:35.069 [709/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:06:35.069 [710/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:06:35.326 [711/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:06:35.327 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:06:35.327 [713/743] Linking target app/dpdk-dumpcap 00:06:35.327 [714/743] Linking target app/dpdk-pdump 00:06:35.585 [715/743] Linking target app/dpdk-proc-info 00:06:35.843 [716/743] Linking target app/dpdk-test-bbdev 00:06:35.843 [717/743] Linking target app/dpdk-test-acl 00:06:35.843 [718/743] Linking target app/dpdk-test-cmdline 00:06:35.843 [719/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:06:35.843 [720/743] Linking target app/dpdk-test-compress-perf 00:06:35.843 [721/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:06:36.125 [722/743] Linking target app/dpdk-test-crypto-perf 00:06:36.125 [723/743] Linking target app/dpdk-test-fib 00:06:36.125 [724/743] Linking target app/dpdk-test-eventdev 00:06:36.125 [725/743] Linking target app/dpdk-test-flow-perf 00:06:36.383 [726/743] Linking target app/dpdk-test-gpudev 00:06:36.383 [727/743] Linking target app/dpdk-test-pipeline 00:06:36.640 [728/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:06:36.640 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:06:36.899 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:06:36.899 [731/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:06:37.157 [732/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:37.157 [733/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:06:37.157 [734/743] Linking target lib/librte_pipeline.so.23.0 00:06:37.157 [735/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:06:37.415 [736/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:06:37.415 [737/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:06:37.415 [738/743] Linking target app/dpdk-test-sad 00:06:37.674 [739/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:06:37.674 [740/743] Linking target app/dpdk-test-regex 00:06:37.932 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:06:38.190 [742/743] 
Linking target app/dpdk-testpmd 00:06:38.448 [743/743] Linking target app/dpdk-test-security-perf 00:06:38.448 12:18:07 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:06:38.448 12:18:07 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:06:38.448 12:18:07 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:06:38.448 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:06:38.448 [0/1] Installing files. 00:06:38.708 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:38.708 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:38.709 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:06:38.709 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.709 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.709 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:06:38.710 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:06:38.710 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.969 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.969 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:38.970 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.970 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:06:38.971 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:06:38.971 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:38.971 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:38.971 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:06:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:06:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:06:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:06:38.972 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:06:38.972 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:38.972 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:38.972 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:38.972 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:38.972 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:38.972 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:38.972 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:38.972 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:38.972 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:38.972 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:38.972 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:38.972 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:38.972 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:38.972 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:38.972 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing 
lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_lpm.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.232 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing lib/librte_node.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:06:39.233 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:06:39.233 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:06:39.233 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.233 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:06:39.233 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:39.233 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:39.233 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:39.233 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:39.233 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:39.233 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:39.233 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:39.233 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:39.233 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:39.233 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:39.233 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:39.233 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:39.233 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:39.233 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:39.233 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:39.233 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:39.233 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.233 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 
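(Aside, not part of the build output: the entries above stage the EAL headers, such as rte_eal.h, rte_lcore.h and rte_malloc.h, into /home/vagrant/spdk_repo/dpdk/build/include. A minimal consumer of that installed tree might look like the sketch below; the file name hello_eal.c and the pkg-config invocation against the libdpdk.pc staged later in this run are illustrative assumptions, not commands executed by this job.)

    /* hello_eal.c - illustrative sketch only; builds against the headers
     * installed above, e.g. (assumed, not run by this job):
     *   cc hello_eal.c $(pkg-config --cflags --libs libdpdk) -o hello_eal
     */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_malloc.h>

    int main(int argc, char **argv)
    {
        /* rte_eal_init() parses EAL arguments (e.g. "-l 0-1") and returns the
         * number of consumed arguments, or a negative value on failure. */
        int ret = rte_eal_init(argc, argv);
        if (ret < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }

        printf("EAL up, %u lcore(s) available\n", rte_lcore_count());

        /* Allocate and release a buffer from DPDK-managed memory. */
        void *buf = rte_malloc("demo", 4096, 0);
        if (buf != NULL)
            rte_free(buf);

        rte_eal_cleanup();
        return 0;
    }
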
00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.234 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing 
/home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 
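(Aside, not part of the build output: with the ring and EAL headers staged above, a self-contained sanity check of the installed tree could look like the sketch below. The ring name "demo_ring", its size, and the enqueued value are made-up example parameters; only the rte_ring_* and rte_eal_* calls themselves come from the installed API.)

    /* ring_demo.c - illustrative sketch assuming the rte_ring.h and rte_eal.h
     * headers installed above; not something this job compiles or runs. */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_memory.h>
    #include <rte_ring.h>

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return 1;

        /* Single-producer/single-consumer ring with 1024 slots. */
        struct rte_ring *r = rte_ring_create("demo_ring", 1024, SOCKET_ID_ANY,
                                             RING_F_SP_ENQ | RING_F_SC_DEQ);
        if (r == NULL) {
            rte_eal_cleanup();
            return 1;
        }

        int value = 42;
        void *obj = NULL;

        /* Both calls return 0 on success, negative errno-style codes otherwise. */
        if (rte_ring_enqueue(r, &value) == 0 &&
            rte_ring_dequeue(r, &obj) == 0)
            printf("dequeued %d\n", *(int *)obj);

        rte_ring_free(r);
        rte_eal_cleanup();
        return 0;
    }
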
00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.235 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:06:39.236 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:06:39.236 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:06:39.236 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:06:39.236 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:06:39.236 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:06:39.236 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:06:39.236 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:06:39.236 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:06:39.236 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:06:39.236 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:06:39.236 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:06:39.236 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:06:39.236 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:06:39.236 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:06:39.236 Installing symlink pointing to librte_mbuf.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:06:39.236 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:06:39.236 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:06:39.236 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:06:39.236 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:06:39.236 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:06:39.236 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:06:39.236 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:06:39.236 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:06:39.236 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:06:39.236 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:06:39.236 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:06:39.236 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:06:39.236 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:06:39.236 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:06:39.236 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:06:39.236 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:06:39.236 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:06:39.236 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:06:39.236 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:06:39.236 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:06:39.236 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:06:39.236 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:06:39.236 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:06:39.236 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:06:39.236 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:06:39.236 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:06:39.236 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:06:39.236 Installing symlink pointing to librte_compressdev.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:06:39.236 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:06:39.236 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:06:39.236 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:06:39.236 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:06:39.236 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:06:39.236 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:06:39.236 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:06:39.236 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:06:39.236 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:06:39.236 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:06:39.236 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:06:39.236 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:06:39.236 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:06:39.236 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:06:39.236 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:06:39.236 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:06:39.236 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:06:39.236 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:06:39.236 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:06:39.236 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:06:39.236 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:06:39.236 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:06:39.236 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:06:39.236 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:06:39.236 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:06:39.236 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:06:39.236 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:06:39.236 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:06:39.236 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:06:39.236 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:06:39.236 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:06:39.236 Installing symlink pointing to librte_latencystats.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:06:39.236 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:06:39.236 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:06:39.236 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:06:39.236 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:06:39.236 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:06:39.236 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:06:39.236 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:06:39.236 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:06:39.237 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:06:39.237 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:06:39.237 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:06:39.237 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:06:39.237 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:06:39.237 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:06:39.237 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:06:39.237 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:06:39.237 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:06:39.237 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:06:39.237 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:06:39.237 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:06:39.237 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:06:39.237 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:06:39.237 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:06:39.237 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:06:39.237 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:06:39.237 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:06:39.237 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:06:39.237 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 
00:06:39.237 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:06:39.237 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:06:39.237 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:06:39.237 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:06:39.237 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:06:39.237 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:06:39.237 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:06:39.237 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:06:39.237 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:06:39.237 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:06:39.237 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:06:39.237 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:06:39.237 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:06:39.237 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:06:39.237 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:06:39.237 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:39.237 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:06:39.237 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:39.237 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:06:39.237 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:39.237 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:06:39.237 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:39.237 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:06:39.495 12:18:08 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:06:39.495 12:18:08 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /home/vagrant/spdk_repo/spdk 00:06:39.495 00:06:39.495 real 0m51.193s 00:06:39.495 user 6m4.707s 00:06:39.495 sys 0m58.366s 00:06:39.495 12:18:08 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:06:39.495 12:18:08 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:06:39.495 
************************************ 00:06:39.495 END TEST build_native_dpdk 00:06:39.495 ************************************ 00:06:39.495 12:18:08 -- common/autotest_common.sh@1142 -- $ return 0 00:06:39.495 12:18:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:39.495 12:18:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:39.495 12:18:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:39.495 12:18:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:39.495 12:18:08 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:39.495 12:18:08 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:06:39.495 12:18:08 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:39.495 12:18:08 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:06:39.495 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:06:39.751 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:06:39.751 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:06:39.751 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:40.008 Using 'verbs' RDMA provider 00:06:53.242 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:07:08.113 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:07:08.113 Creating mk/config.mk...done. 00:07:08.113 Creating mk/cc.flags.mk...done. 00:07:08.113 Type 'make' to build. 00:07:08.113 12:18:35 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:07:08.113 12:18:35 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:07:08.113 12:18:35 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:07:08.113 12:18:35 -- common/autotest_common.sh@10 -- $ set +x 00:07:08.113 ************************************ 00:07:08.113 START TEST make 00:07:08.113 ************************************ 00:07:08.113 12:18:35 make -- common/autotest_common.sh@1123 -- $ make -j10 00:07:08.113 make[1]: Nothing to be done for 'all'. 
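The --with-dpdk path handed to configure above is resolved through the pkg-config files installed a few entries earlier (libdpdk.pc and libdpdk-libs.pc under build/lib/pkgconfig), which is also where the "Using .../pkgconfig for additional libs" and "DPDK libraries/includes" lines come from. As a rough illustration of that mechanism only — this command is not taken from the log, and hello_dpdk.c is a made-up file name — a standalone program could be compiled against the same installed tree like this:

    # point pkg-config at the freshly installed DPDK prefix shown in the log above
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk        # prints whatever version libdpdk.pc records for this build
    # compile and link with the cflags/libs advertised by libdpdk.pc
    cc $(pkg-config --cflags libdpdk) hello_dpdk.c -o hello_dpdk $(pkg-config --libs libdpdk)
    # the PMDs (bus_pci, bus_vdev, mempool_ring, net_i40e) were symlinked into
    # build/lib/dpdk/pmds-23.0 by symlink-drivers-solibs.sh, so the runtime linker
    # also needs to find build/lib:
    export LD_LIBRARY_PATH=/home/vagrant/spdk_repo/dpdk/build/lib:$LD_LIBRARY_PATH
    ./hello_dpdk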
00:07:30.033 CC lib/ut/ut.o 00:07:30.033 CC lib/log/log.o 00:07:30.033 CC lib/log/log_flags.o 00:07:30.033 CC lib/log/log_deprecated.o 00:07:30.033 CC lib/ut_mock/mock.o 00:07:30.033 LIB libspdk_ut.a 00:07:30.033 LIB libspdk_ut_mock.a 00:07:30.033 SO libspdk_ut.so.2.0 00:07:30.033 LIB libspdk_log.a 00:07:30.033 SO libspdk_ut_mock.so.6.0 00:07:30.033 SO libspdk_log.so.7.0 00:07:30.033 SYMLINK libspdk_ut.so 00:07:30.033 SYMLINK libspdk_ut_mock.so 00:07:30.033 SYMLINK libspdk_log.so 00:07:30.291 CC lib/ioat/ioat.o 00:07:30.291 CC lib/util/base64.o 00:07:30.291 CC lib/util/bit_array.o 00:07:30.291 CC lib/dma/dma.o 00:07:30.291 CC lib/util/cpuset.o 00:07:30.291 CC lib/util/crc16.o 00:07:30.291 CC lib/util/crc32.o 00:07:30.291 CC lib/util/crc32c.o 00:07:30.291 CXX lib/trace_parser/trace.o 00:07:30.291 CC lib/vfio_user/host/vfio_user_pci.o 00:07:30.291 CC lib/util/crc32_ieee.o 00:07:30.291 CC lib/util/crc64.o 00:07:30.291 CC lib/util/dif.o 00:07:30.291 LIB libspdk_dma.a 00:07:30.291 CC lib/util/fd.o 00:07:30.627 CC lib/util/file.o 00:07:30.627 CC lib/util/hexlify.o 00:07:30.627 SO libspdk_dma.so.4.0 00:07:30.627 LIB libspdk_ioat.a 00:07:30.627 CC lib/vfio_user/host/vfio_user.o 00:07:30.627 SYMLINK libspdk_dma.so 00:07:30.627 CC lib/util/iov.o 00:07:30.627 CC lib/util/math.o 00:07:30.627 SO libspdk_ioat.so.7.0 00:07:30.627 CC lib/util/pipe.o 00:07:30.627 CC lib/util/strerror_tls.o 00:07:30.627 CC lib/util/string.o 00:07:30.627 SYMLINK libspdk_ioat.so 00:07:30.627 CC lib/util/uuid.o 00:07:30.627 CC lib/util/fd_group.o 00:07:30.627 CC lib/util/xor.o 00:07:30.627 LIB libspdk_vfio_user.a 00:07:30.627 CC lib/util/zipf.o 00:07:30.888 SO libspdk_vfio_user.so.5.0 00:07:30.888 SYMLINK libspdk_vfio_user.so 00:07:30.888 LIB libspdk_util.a 00:07:31.145 SO libspdk_util.so.9.1 00:07:31.145 LIB libspdk_trace_parser.a 00:07:31.145 SYMLINK libspdk_util.so 00:07:31.145 SO libspdk_trace_parser.so.5.0 00:07:31.404 SYMLINK libspdk_trace_parser.so 00:07:31.404 CC lib/rdma_provider/common.o 00:07:31.404 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:31.404 CC lib/conf/conf.o 00:07:31.404 CC lib/json/json_parse.o 00:07:31.404 CC lib/vmd/vmd.o 00:07:31.404 CC lib/json/json_util.o 00:07:31.404 CC lib/vmd/led.o 00:07:31.404 CC lib/idxd/idxd.o 00:07:31.404 CC lib/env_dpdk/env.o 00:07:31.404 CC lib/rdma_utils/rdma_utils.o 00:07:31.662 CC lib/idxd/idxd_user.o 00:07:31.662 LIB libspdk_rdma_provider.a 00:07:31.662 CC lib/idxd/idxd_kernel.o 00:07:31.662 SO libspdk_rdma_provider.so.6.0 00:07:31.662 LIB libspdk_conf.a 00:07:31.662 CC lib/json/json_write.o 00:07:31.662 CC lib/env_dpdk/memory.o 00:07:31.662 SO libspdk_conf.so.6.0 00:07:31.662 SYMLINK libspdk_rdma_provider.so 00:07:31.662 LIB libspdk_rdma_utils.a 00:07:31.662 CC lib/env_dpdk/pci.o 00:07:31.662 SYMLINK libspdk_conf.so 00:07:31.662 CC lib/env_dpdk/init.o 00:07:31.662 SO libspdk_rdma_utils.so.1.0 00:07:31.662 CC lib/env_dpdk/threads.o 00:07:31.662 SYMLINK libspdk_rdma_utils.so 00:07:31.662 CC lib/env_dpdk/pci_ioat.o 00:07:31.662 CC lib/env_dpdk/pci_virtio.o 00:07:31.921 CC lib/env_dpdk/pci_vmd.o 00:07:31.921 CC lib/env_dpdk/pci_idxd.o 00:07:31.921 LIB libspdk_json.a 00:07:31.921 CC lib/env_dpdk/pci_event.o 00:07:31.921 LIB libspdk_idxd.a 00:07:31.921 SO libspdk_json.so.6.0 00:07:31.921 LIB libspdk_vmd.a 00:07:31.921 SO libspdk_idxd.so.12.0 00:07:31.921 CC lib/env_dpdk/sigbus_handler.o 00:07:31.921 SO libspdk_vmd.so.6.0 00:07:31.921 SYMLINK libspdk_json.so 00:07:31.921 CC lib/env_dpdk/pci_dpdk.o 00:07:31.921 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:32.179 CC 
lib/env_dpdk/pci_dpdk_2211.o 00:07:32.179 SYMLINK libspdk_idxd.so 00:07:32.179 SYMLINK libspdk_vmd.so 00:07:32.179 CC lib/jsonrpc/jsonrpc_server.o 00:07:32.179 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:32.179 CC lib/jsonrpc/jsonrpc_client.o 00:07:32.179 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:32.438 LIB libspdk_jsonrpc.a 00:07:32.438 SO libspdk_jsonrpc.so.6.0 00:07:32.697 SYMLINK libspdk_jsonrpc.so 00:07:32.697 LIB libspdk_env_dpdk.a 00:07:32.955 SO libspdk_env_dpdk.so.14.1 00:07:32.955 CC lib/rpc/rpc.o 00:07:32.955 SYMLINK libspdk_env_dpdk.so 00:07:33.213 LIB libspdk_rpc.a 00:07:33.213 SO libspdk_rpc.so.6.0 00:07:33.213 SYMLINK libspdk_rpc.so 00:07:33.471 CC lib/keyring/keyring.o 00:07:33.471 CC lib/notify/notify.o 00:07:33.471 CC lib/notify/notify_rpc.o 00:07:33.471 CC lib/keyring/keyring_rpc.o 00:07:33.471 CC lib/trace/trace.o 00:07:33.471 CC lib/trace/trace_flags.o 00:07:33.471 CC lib/trace/trace_rpc.o 00:07:33.729 LIB libspdk_notify.a 00:07:33.730 SO libspdk_notify.so.6.0 00:07:33.730 LIB libspdk_keyring.a 00:07:33.730 SO libspdk_keyring.so.1.0 00:07:33.730 LIB libspdk_trace.a 00:07:33.730 SYMLINK libspdk_notify.so 00:07:33.730 SO libspdk_trace.so.10.0 00:07:33.730 SYMLINK libspdk_keyring.so 00:07:33.988 SYMLINK libspdk_trace.so 00:07:34.245 CC lib/thread/thread.o 00:07:34.245 CC lib/thread/iobuf.o 00:07:34.245 CC lib/sock/sock_rpc.o 00:07:34.245 CC lib/sock/sock.o 00:07:34.503 LIB libspdk_sock.a 00:07:34.761 SO libspdk_sock.so.10.0 00:07:34.761 SYMLINK libspdk_sock.so 00:07:35.055 CC lib/nvme/nvme_ctrlr.o 00:07:35.055 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:35.055 CC lib/nvme/nvme_fabric.o 00:07:35.055 CC lib/nvme/nvme_ns_cmd.o 00:07:35.055 CC lib/nvme/nvme_ns.o 00:07:35.055 CC lib/nvme/nvme_pcie_common.o 00:07:35.055 CC lib/nvme/nvme_pcie.o 00:07:35.055 CC lib/nvme/nvme.o 00:07:35.055 CC lib/nvme/nvme_qpair.o 00:07:35.642 LIB libspdk_thread.a 00:07:35.642 SO libspdk_thread.so.10.1 00:07:35.642 CC lib/nvme/nvme_quirks.o 00:07:35.900 CC lib/nvme/nvme_transport.o 00:07:35.900 CC lib/nvme/nvme_discovery.o 00:07:35.900 SYMLINK libspdk_thread.so 00:07:35.900 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:35.900 CC lib/accel/accel.o 00:07:36.166 CC lib/blob/blobstore.o 00:07:36.166 CC lib/init/json_config.o 00:07:36.166 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:36.166 CC lib/virtio/virtio.o 00:07:36.426 CC lib/init/subsystem.o 00:07:36.426 CC lib/nvme/nvme_tcp.o 00:07:36.426 CC lib/accel/accel_rpc.o 00:07:36.426 CC lib/virtio/virtio_vhost_user.o 00:07:36.426 CC lib/accel/accel_sw.o 00:07:36.684 CC lib/init/subsystem_rpc.o 00:07:36.684 CC lib/init/rpc.o 00:07:36.684 CC lib/virtio/virtio_vfio_user.o 00:07:36.684 CC lib/virtio/virtio_pci.o 00:07:36.684 LIB libspdk_init.a 00:07:36.684 CC lib/nvme/nvme_opal.o 00:07:36.684 CC lib/nvme/nvme_io_msg.o 00:07:36.684 SO libspdk_init.so.5.0 00:07:36.684 CC lib/blob/request.o 00:07:36.941 CC lib/blob/zeroes.o 00:07:36.941 SYMLINK libspdk_init.so 00:07:36.941 CC lib/blob/blob_bs_dev.o 00:07:36.941 LIB libspdk_accel.a 00:07:36.941 SO libspdk_accel.so.15.1 00:07:36.941 LIB libspdk_virtio.a 00:07:36.941 CC lib/nvme/nvme_poll_group.o 00:07:36.941 SO libspdk_virtio.so.7.0 00:07:36.941 SYMLINK libspdk_accel.so 00:07:37.200 CC lib/nvme/nvme_zns.o 00:07:37.200 SYMLINK libspdk_virtio.so 00:07:37.200 CC lib/nvme/nvme_stubs.o 00:07:37.200 CC lib/nvme/nvme_auth.o 00:07:37.200 CC lib/nvme/nvme_cuse.o 00:07:37.457 CC lib/bdev/bdev.o 00:07:37.457 CC lib/event/app.o 00:07:37.457 CC lib/event/reactor.o 00:07:37.716 CC lib/event/log_rpc.o 00:07:37.716 CC lib/nvme/nvme_rdma.o 
00:07:37.716 CC lib/event/app_rpc.o 00:07:37.716 CC lib/event/scheduler_static.o 00:07:37.716 CC lib/bdev/bdev_rpc.o 00:07:37.974 CC lib/bdev/bdev_zone.o 00:07:37.974 CC lib/bdev/part.o 00:07:37.974 CC lib/bdev/scsi_nvme.o 00:07:37.974 LIB libspdk_event.a 00:07:37.974 SO libspdk_event.so.14.0 00:07:38.232 SYMLINK libspdk_event.so 00:07:39.168 LIB libspdk_nvme.a 00:07:39.168 LIB libspdk_blob.a 00:07:39.168 SO libspdk_blob.so.11.0 00:07:39.168 SO libspdk_nvme.so.13.1 00:07:39.168 SYMLINK libspdk_blob.so 00:07:39.430 CC lib/blobfs/blobfs.o 00:07:39.430 CC lib/blobfs/tree.o 00:07:39.430 CC lib/lvol/lvol.o 00:07:39.430 SYMLINK libspdk_nvme.so 00:07:39.996 LIB libspdk_bdev.a 00:07:39.996 SO libspdk_bdev.so.15.1 00:07:40.254 SYMLINK libspdk_bdev.so 00:07:40.254 LIB libspdk_blobfs.a 00:07:40.512 CC lib/ftl/ftl_core.o 00:07:40.512 CC lib/ftl/ftl_init.o 00:07:40.512 CC lib/ftl/ftl_layout.o 00:07:40.512 CC lib/nvmf/ctrlr.o 00:07:40.512 CC lib/ftl/ftl_debug.o 00:07:40.512 CC lib/scsi/dev.o 00:07:40.512 LIB libspdk_lvol.a 00:07:40.512 SO libspdk_blobfs.so.10.0 00:07:40.512 CC lib/ublk/ublk.o 00:07:40.512 CC lib/nbd/nbd.o 00:07:40.512 SO libspdk_lvol.so.10.0 00:07:40.512 SYMLINK libspdk_blobfs.so 00:07:40.512 CC lib/nbd/nbd_rpc.o 00:07:40.512 SYMLINK libspdk_lvol.so 00:07:40.512 CC lib/nvmf/ctrlr_discovery.o 00:07:40.512 CC lib/nvmf/ctrlr_bdev.o 00:07:40.771 CC lib/scsi/lun.o 00:07:40.771 CC lib/scsi/port.o 00:07:40.771 CC lib/ublk/ublk_rpc.o 00:07:40.771 CC lib/nvmf/subsystem.o 00:07:40.771 CC lib/ftl/ftl_io.o 00:07:40.771 CC lib/ftl/ftl_sb.o 00:07:40.771 LIB libspdk_nbd.a 00:07:40.771 SO libspdk_nbd.so.7.0 00:07:41.029 CC lib/nvmf/nvmf.o 00:07:41.029 SYMLINK libspdk_nbd.so 00:07:41.029 CC lib/scsi/scsi.o 00:07:41.029 CC lib/scsi/scsi_bdev.o 00:07:41.029 CC lib/scsi/scsi_pr.o 00:07:41.029 LIB libspdk_ublk.a 00:07:41.029 CC lib/ftl/ftl_l2p.o 00:07:41.029 SO libspdk_ublk.so.3.0 00:07:41.029 CC lib/nvmf/nvmf_rpc.o 00:07:41.029 CC lib/nvmf/transport.o 00:07:41.029 SYMLINK libspdk_ublk.so 00:07:41.029 CC lib/nvmf/tcp.o 00:07:41.286 CC lib/ftl/ftl_l2p_flat.o 00:07:41.286 CC lib/nvmf/stubs.o 00:07:41.286 CC lib/scsi/scsi_rpc.o 00:07:41.286 CC lib/nvmf/mdns_server.o 00:07:41.544 CC lib/ftl/ftl_nv_cache.o 00:07:41.544 CC lib/scsi/task.o 00:07:41.801 LIB libspdk_scsi.a 00:07:41.801 CC lib/nvmf/rdma.o 00:07:41.801 SO libspdk_scsi.so.9.0 00:07:41.801 CC lib/nvmf/auth.o 00:07:41.801 CC lib/ftl/ftl_band.o 00:07:41.801 SYMLINK libspdk_scsi.so 00:07:41.801 CC lib/ftl/ftl_band_ops.o 00:07:42.059 CC lib/ftl/ftl_writer.o 00:07:42.059 CC lib/ftl/ftl_rq.o 00:07:42.059 CC lib/iscsi/conn.o 00:07:42.059 CC lib/vhost/vhost.o 00:07:42.318 CC lib/ftl/ftl_reloc.o 00:07:42.318 CC lib/ftl/ftl_l2p_cache.o 00:07:42.318 CC lib/ftl/ftl_p2l.o 00:07:42.318 CC lib/ftl/mngt/ftl_mngt.o 00:07:42.576 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:42.576 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:42.576 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:42.576 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:42.576 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:42.576 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:42.576 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:42.576 CC lib/iscsi/init_grp.o 00:07:42.834 CC lib/vhost/vhost_rpc.o 00:07:42.834 CC lib/vhost/vhost_scsi.o 00:07:42.834 CC lib/vhost/vhost_blk.o 00:07:42.834 CC lib/vhost/rte_vhost_user.o 00:07:42.834 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:42.834 CC lib/iscsi/iscsi.o 00:07:42.834 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:42.834 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:43.092 CC lib/iscsi/md5.o 00:07:43.092 CC lib/iscsi/param.o 00:07:43.092 CC 
lib/iscsi/portal_grp.o 00:07:43.092 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:43.092 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:43.349 CC lib/iscsi/tgt_node.o 00:07:43.349 CC lib/iscsi/iscsi_subsystem.o 00:07:43.349 CC lib/iscsi/iscsi_rpc.o 00:07:43.349 CC lib/ftl/utils/ftl_conf.o 00:07:43.631 CC lib/ftl/utils/ftl_md.o 00:07:43.631 CC lib/ftl/utils/ftl_mempool.o 00:07:43.631 LIB libspdk_nvmf.a 00:07:43.631 CC lib/iscsi/task.o 00:07:43.889 CC lib/ftl/utils/ftl_bitmap.o 00:07:43.890 CC lib/ftl/utils/ftl_property.o 00:07:43.890 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:43.890 SO libspdk_nvmf.so.18.1 00:07:43.890 LIB libspdk_vhost.a 00:07:43.890 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:43.890 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:43.890 SO libspdk_vhost.so.8.0 00:07:43.890 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:44.153 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:44.153 SYMLINK libspdk_nvmf.so 00:07:44.153 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:44.153 SYMLINK libspdk_vhost.so 00:07:44.153 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:44.153 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:44.153 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:44.153 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:44.153 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:44.153 CC lib/ftl/base/ftl_base_dev.o 00:07:44.153 CC lib/ftl/base/ftl_base_bdev.o 00:07:44.153 LIB libspdk_iscsi.a 00:07:44.153 CC lib/ftl/ftl_trace.o 00:07:44.153 SO libspdk_iscsi.so.8.0 00:07:44.411 SYMLINK libspdk_iscsi.so 00:07:44.411 LIB libspdk_ftl.a 00:07:44.671 SO libspdk_ftl.so.9.0 00:07:45.238 SYMLINK libspdk_ftl.so 00:07:45.496 CC module/env_dpdk/env_dpdk_rpc.o 00:07:45.496 CC module/accel/error/accel_error.o 00:07:45.496 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:45.496 CC module/keyring/file/keyring.o 00:07:45.496 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:45.496 CC module/scheduler/gscheduler/gscheduler.o 00:07:45.496 CC module/sock/posix/posix.o 00:07:45.496 CC module/accel/dsa/accel_dsa.o 00:07:45.496 CC module/accel/ioat/accel_ioat.o 00:07:45.496 CC module/blob/bdev/blob_bdev.o 00:07:45.496 LIB libspdk_env_dpdk_rpc.a 00:07:45.496 SO libspdk_env_dpdk_rpc.so.6.0 00:07:45.496 SYMLINK libspdk_env_dpdk_rpc.so 00:07:45.496 CC module/accel/ioat/accel_ioat_rpc.o 00:07:45.496 CC module/keyring/file/keyring_rpc.o 00:07:45.496 LIB libspdk_scheduler_dpdk_governor.a 00:07:45.496 LIB libspdk_scheduler_gscheduler.a 00:07:45.754 CC module/accel/error/accel_error_rpc.o 00:07:45.754 SO libspdk_scheduler_gscheduler.so.4.0 00:07:45.754 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:45.754 LIB libspdk_scheduler_dynamic.a 00:07:45.754 SO libspdk_scheduler_dynamic.so.4.0 00:07:45.754 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:45.754 SYMLINK libspdk_scheduler_gscheduler.so 00:07:45.754 CC module/accel/dsa/accel_dsa_rpc.o 00:07:45.754 LIB libspdk_accel_ioat.a 00:07:45.754 SYMLINK libspdk_scheduler_dynamic.so 00:07:45.754 LIB libspdk_blob_bdev.a 00:07:45.754 LIB libspdk_keyring_file.a 00:07:45.754 SO libspdk_blob_bdev.so.11.0 00:07:45.754 SO libspdk_accel_ioat.so.6.0 00:07:45.754 LIB libspdk_accel_error.a 00:07:45.754 SO libspdk_keyring_file.so.1.0 00:07:45.754 SO libspdk_accel_error.so.2.0 00:07:45.754 SYMLINK libspdk_blob_bdev.so 00:07:45.754 SYMLINK libspdk_accel_ioat.so 00:07:45.754 CC module/sock/uring/uring.o 00:07:45.754 SYMLINK libspdk_keyring_file.so 00:07:45.754 LIB libspdk_accel_dsa.a 00:07:45.754 SYMLINK libspdk_accel_error.so 00:07:46.012 CC module/keyring/linux/keyring.o 00:07:46.012 CC module/keyring/linux/keyring_rpc.o 00:07:46.012 
CC module/accel/iaa/accel_iaa.o 00:07:46.012 CC module/accel/iaa/accel_iaa_rpc.o 00:07:46.012 SO libspdk_accel_dsa.so.5.0 00:07:46.012 SYMLINK libspdk_accel_dsa.so 00:07:46.012 LIB libspdk_keyring_linux.a 00:07:46.012 SO libspdk_keyring_linux.so.1.0 00:07:46.012 CC module/bdev/delay/vbdev_delay.o 00:07:46.012 CC module/bdev/error/vbdev_error.o 00:07:46.012 CC module/blobfs/bdev/blobfs_bdev.o 00:07:46.012 CC module/bdev/error/vbdev_error_rpc.o 00:07:46.012 LIB libspdk_accel_iaa.a 00:07:46.270 SO libspdk_accel_iaa.so.3.0 00:07:46.270 CC module/bdev/gpt/gpt.o 00:07:46.270 SYMLINK libspdk_keyring_linux.so 00:07:46.270 LIB libspdk_sock_posix.a 00:07:46.270 CC module/bdev/lvol/vbdev_lvol.o 00:07:46.270 SO libspdk_sock_posix.so.6.0 00:07:46.270 SYMLINK libspdk_accel_iaa.so 00:07:46.270 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:46.270 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:46.270 SYMLINK libspdk_sock_posix.so 00:07:46.270 CC module/bdev/gpt/vbdev_gpt.o 00:07:46.270 CC module/bdev/malloc/bdev_malloc.o 00:07:46.528 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:46.528 LIB libspdk_bdev_error.a 00:07:46.528 CC module/bdev/null/bdev_null.o 00:07:46.528 SO libspdk_bdev_error.so.6.0 00:07:46.528 LIB libspdk_blobfs_bdev.a 00:07:46.528 LIB libspdk_bdev_delay.a 00:07:46.528 SO libspdk_blobfs_bdev.so.6.0 00:07:46.528 CC module/bdev/nvme/bdev_nvme.o 00:07:46.528 LIB libspdk_sock_uring.a 00:07:46.528 SO libspdk_bdev_delay.so.6.0 00:07:46.528 SYMLINK libspdk_bdev_error.so 00:07:46.528 SO libspdk_sock_uring.so.5.0 00:07:46.528 SYMLINK libspdk_blobfs_bdev.so 00:07:46.528 SYMLINK libspdk_bdev_delay.so 00:07:46.528 CC module/bdev/null/bdev_null_rpc.o 00:07:46.528 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:46.528 SYMLINK libspdk_sock_uring.so 00:07:46.528 CC module/bdev/nvme/nvme_rpc.o 00:07:46.528 LIB libspdk_bdev_gpt.a 00:07:46.786 SO libspdk_bdev_gpt.so.6.0 00:07:46.786 LIB libspdk_bdev_malloc.a 00:07:46.786 CC module/bdev/passthru/vbdev_passthru.o 00:07:46.786 CC module/bdev/nvme/bdev_mdns_client.o 00:07:46.786 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:46.786 SO libspdk_bdev_malloc.so.6.0 00:07:46.786 SYMLINK libspdk_bdev_gpt.so 00:07:46.786 CC module/bdev/raid/bdev_raid.o 00:07:46.786 CC module/bdev/raid/bdev_raid_rpc.o 00:07:46.786 LIB libspdk_bdev_null.a 00:07:46.786 SYMLINK libspdk_bdev_malloc.so 00:07:46.786 CC module/bdev/raid/bdev_raid_sb.o 00:07:46.786 SO libspdk_bdev_null.so.6.0 00:07:46.786 CC module/bdev/raid/raid0.o 00:07:46.786 SYMLINK libspdk_bdev_null.so 00:07:46.786 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:47.044 CC module/bdev/raid/raid1.o 00:07:47.044 CC module/bdev/raid/concat.o 00:07:47.044 LIB libspdk_bdev_passthru.a 00:07:47.044 CC module/bdev/split/vbdev_split.o 00:07:47.044 LIB libspdk_bdev_lvol.a 00:07:47.044 SO libspdk_bdev_passthru.so.6.0 00:07:47.044 SO libspdk_bdev_lvol.so.6.0 00:07:47.302 CC module/bdev/split/vbdev_split_rpc.o 00:07:47.302 CC module/bdev/nvme/vbdev_opal.o 00:07:47.302 SYMLINK libspdk_bdev_passthru.so 00:07:47.302 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:47.302 SYMLINK libspdk_bdev_lvol.so 00:07:47.302 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:47.302 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:47.302 LIB libspdk_bdev_split.a 00:07:47.302 SO libspdk_bdev_split.so.6.0 00:07:47.561 CC module/bdev/uring/bdev_uring.o 00:07:47.561 CC module/bdev/ftl/bdev_ftl.o 00:07:47.561 CC module/bdev/aio/bdev_aio.o 00:07:47.561 SYMLINK libspdk_bdev_split.so 00:07:47.561 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:47.561 CC 
module/bdev/uring/bdev_uring_rpc.o 00:07:47.561 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:47.561 CC module/bdev/iscsi/bdev_iscsi.o 00:07:47.561 LIB libspdk_bdev_zone_block.a 00:07:47.561 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:47.820 LIB libspdk_bdev_raid.a 00:07:47.820 SO libspdk_bdev_zone_block.so.6.0 00:07:47.820 SO libspdk_bdev_raid.so.6.0 00:07:47.820 SYMLINK libspdk_bdev_zone_block.so 00:07:47.820 CC module/bdev/aio/bdev_aio_rpc.o 00:07:47.820 LIB libspdk_bdev_ftl.a 00:07:47.820 LIB libspdk_bdev_uring.a 00:07:47.820 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:47.820 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:47.820 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:47.820 SO libspdk_bdev_ftl.so.6.0 00:07:47.820 SO libspdk_bdev_uring.so.6.0 00:07:47.820 SYMLINK libspdk_bdev_raid.so 00:07:47.820 SYMLINK libspdk_bdev_uring.so 00:07:47.820 SYMLINK libspdk_bdev_ftl.so 00:07:48.078 LIB libspdk_bdev_aio.a 00:07:48.078 SO libspdk_bdev_aio.so.6.0 00:07:48.078 LIB libspdk_bdev_iscsi.a 00:07:48.078 SO libspdk_bdev_iscsi.so.6.0 00:07:48.078 SYMLINK libspdk_bdev_aio.so 00:07:48.078 SYMLINK libspdk_bdev_iscsi.so 00:07:48.335 LIB libspdk_bdev_virtio.a 00:07:48.335 SO libspdk_bdev_virtio.so.6.0 00:07:48.593 SYMLINK libspdk_bdev_virtio.so 00:07:48.852 LIB libspdk_bdev_nvme.a 00:07:48.852 SO libspdk_bdev_nvme.so.7.0 00:07:48.852 SYMLINK libspdk_bdev_nvme.so 00:07:49.417 CC module/event/subsystems/iobuf/iobuf.o 00:07:49.417 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:49.417 CC module/event/subsystems/keyring/keyring.o 00:07:49.417 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:49.417 CC module/event/subsystems/scheduler/scheduler.o 00:07:49.417 CC module/event/subsystems/sock/sock.o 00:07:49.417 CC module/event/subsystems/vmd/vmd.o 00:07:49.417 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:49.675 LIB libspdk_event_scheduler.a 00:07:49.675 LIB libspdk_event_keyring.a 00:07:49.675 LIB libspdk_event_vmd.a 00:07:49.675 LIB libspdk_event_sock.a 00:07:49.675 SO libspdk_event_keyring.so.1.0 00:07:49.675 SO libspdk_event_scheduler.so.4.0 00:07:49.675 LIB libspdk_event_vhost_blk.a 00:07:49.675 LIB libspdk_event_iobuf.a 00:07:49.675 SO libspdk_event_vmd.so.6.0 00:07:49.675 SO libspdk_event_vhost_blk.so.3.0 00:07:49.675 SO libspdk_event_sock.so.5.0 00:07:49.675 SYMLINK libspdk_event_scheduler.so 00:07:49.675 SO libspdk_event_iobuf.so.3.0 00:07:49.675 SYMLINK libspdk_event_keyring.so 00:07:49.675 SYMLINK libspdk_event_vhost_blk.so 00:07:49.675 SYMLINK libspdk_event_vmd.so 00:07:49.675 SYMLINK libspdk_event_sock.so 00:07:49.675 SYMLINK libspdk_event_iobuf.so 00:07:49.932 CC module/event/subsystems/accel/accel.o 00:07:50.190 LIB libspdk_event_accel.a 00:07:50.190 SO libspdk_event_accel.so.6.0 00:07:50.190 SYMLINK libspdk_event_accel.so 00:07:50.757 CC module/event/subsystems/bdev/bdev.o 00:07:50.757 LIB libspdk_event_bdev.a 00:07:51.014 SO libspdk_event_bdev.so.6.0 00:07:51.014 SYMLINK libspdk_event_bdev.so 00:07:51.272 CC module/event/subsystems/scsi/scsi.o 00:07:51.272 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:51.272 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:51.272 CC module/event/subsystems/nbd/nbd.o 00:07:51.272 CC module/event/subsystems/ublk/ublk.o 00:07:51.272 LIB libspdk_event_ublk.a 00:07:51.272 LIB libspdk_event_nbd.a 00:07:51.623 LIB libspdk_event_scsi.a 00:07:51.623 SO libspdk_event_nbd.so.6.0 00:07:51.623 SO libspdk_event_ublk.so.3.0 00:07:51.623 SO libspdk_event_scsi.so.6.0 00:07:51.623 SYMLINK libspdk_event_nbd.so 00:07:51.623 LIB libspdk_event_nvmf.a 00:07:51.623 
SYMLINK libspdk_event_ublk.so 00:07:51.623 SYMLINK libspdk_event_scsi.so 00:07:51.623 SO libspdk_event_nvmf.so.6.0 00:07:51.623 SYMLINK libspdk_event_nvmf.so 00:07:51.881 CC module/event/subsystems/iscsi/iscsi.o 00:07:51.881 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:51.881 LIB libspdk_event_vhost_scsi.a 00:07:51.881 LIB libspdk_event_iscsi.a 00:07:51.881 SO libspdk_event_vhost_scsi.so.3.0 00:07:51.881 SO libspdk_event_iscsi.so.6.0 00:07:52.139 SYMLINK libspdk_event_vhost_scsi.so 00:07:52.139 SYMLINK libspdk_event_iscsi.so 00:07:52.139 SO libspdk.so.6.0 00:07:52.139 SYMLINK libspdk.so 00:07:52.397 CC app/trace_record/trace_record.o 00:07:52.397 CC app/spdk_lspci/spdk_lspci.o 00:07:52.397 CXX app/trace/trace.o 00:07:52.397 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:52.397 CC app/nvmf_tgt/nvmf_main.o 00:07:52.654 CC app/iscsi_tgt/iscsi_tgt.o 00:07:52.654 CC app/spdk_tgt/spdk_tgt.o 00:07:52.654 CC examples/util/zipf/zipf.o 00:07:52.654 CC examples/ioat/perf/perf.o 00:07:52.654 CC test/thread/poller_perf/poller_perf.o 00:07:52.654 LINK spdk_lspci 00:07:52.654 LINK nvmf_tgt 00:07:52.654 LINK interrupt_tgt 00:07:52.654 LINK zipf 00:07:52.654 LINK spdk_trace_record 00:07:52.654 LINK poller_perf 00:07:52.912 LINK iscsi_tgt 00:07:52.912 LINK spdk_tgt 00:07:52.912 LINK ioat_perf 00:07:52.912 CC app/spdk_nvme_perf/perf.o 00:07:52.912 LINK spdk_trace 00:07:52.912 CC app/spdk_nvme_identify/identify.o 00:07:52.912 CC examples/ioat/verify/verify.o 00:07:53.169 CC app/spdk_nvme_discover/discovery_aer.o 00:07:53.169 CC app/spdk_top/spdk_top.o 00:07:53.169 CC app/spdk_dd/spdk_dd.o 00:07:53.169 CC test/dma/test_dma/test_dma.o 00:07:53.169 CC app/fio/nvme/fio_plugin.o 00:07:53.169 CC examples/thread/thread/thread_ex.o 00:07:53.169 LINK verify 00:07:53.169 LINK spdk_nvme_discover 00:07:53.427 CC examples/sock/hello_world/hello_sock.o 00:07:53.427 LINK thread 00:07:53.684 LINK test_dma 00:07:53.684 LINK hello_sock 00:07:53.684 CC examples/vmd/lsvmd/lsvmd.o 00:07:53.684 LINK spdk_dd 00:07:53.684 CC examples/idxd/perf/perf.o 00:07:53.684 LINK lsvmd 00:07:53.684 LINK spdk_nvme 00:07:53.684 LINK spdk_nvme_identify 00:07:53.941 LINK spdk_nvme_perf 00:07:53.941 CC app/fio/bdev/fio_plugin.o 00:07:53.941 LINK spdk_top 00:07:53.941 CC app/vhost/vhost.o 00:07:53.941 CC examples/accel/perf/accel_perf.o 00:07:53.941 CC examples/vmd/led/led.o 00:07:53.941 CC test/app/bdev_svc/bdev_svc.o 00:07:53.941 LINK idxd_perf 00:07:54.198 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:54.198 LINK led 00:07:54.198 CC examples/nvme/hello_world/hello_world.o 00:07:54.198 LINK vhost 00:07:54.198 CC examples/blob/hello_world/hello_blob.o 00:07:54.198 LINK bdev_svc 00:07:54.198 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:54.456 CC test/blobfs/mkfs/mkfs.o 00:07:54.456 LINK spdk_bdev 00:07:54.456 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:54.456 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:54.456 LINK hello_world 00:07:54.456 LINK accel_perf 00:07:54.456 LINK hello_blob 00:07:54.456 LINK nvme_fuzz 00:07:54.456 LINK mkfs 00:07:54.456 CC examples/nvme/reconnect/reconnect.o 00:07:54.715 CC test/app/histogram_perf/histogram_perf.o 00:07:54.715 TEST_HEADER include/spdk/accel.h 00:07:54.715 TEST_HEADER include/spdk/accel_module.h 00:07:54.715 TEST_HEADER include/spdk/assert.h 00:07:54.715 TEST_HEADER include/spdk/barrier.h 00:07:54.715 TEST_HEADER include/spdk/base64.h 00:07:54.715 TEST_HEADER include/spdk/bdev.h 00:07:54.715 TEST_HEADER include/spdk/bdev_module.h 00:07:54.715 TEST_HEADER include/spdk/bdev_zone.h 
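The long run of TEST_HEADER include/spdk/*.h entries here, together with the CXX test/cpp_headers/*.o objects that follow, looks like a public-header compile check: each installed header gets its own small C++ translation unit, so a header that is not self-contained (or not C++-clean) fails on its own rather than hiding behind another include. SPDK's actual rule may differ; the loop, paths, and compiler flags below are purely an illustrative sketch of that general technique:

    # illustrative only: compile one C++ TU per public header to prove it stands alone
    for h in include/spdk/*.h; do
        name=$(basename "$h" .h)
        printf '#include <spdk/%s.h>\n' "$name" > "test/cpp_headers/$name.cpp"
        c++ -Iinclude -std=c++17 -c "test/cpp_headers/$name.cpp" -o "test/cpp_headers/$name.o"
    done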
00:07:54.715 TEST_HEADER include/spdk/bit_array.h 00:07:54.715 TEST_HEADER include/spdk/bit_pool.h 00:07:54.715 TEST_HEADER include/spdk/blob_bdev.h 00:07:54.715 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:54.715 TEST_HEADER include/spdk/blobfs.h 00:07:54.715 TEST_HEADER include/spdk/blob.h 00:07:54.715 TEST_HEADER include/spdk/conf.h 00:07:54.715 TEST_HEADER include/spdk/config.h 00:07:54.715 TEST_HEADER include/spdk/cpuset.h 00:07:54.715 TEST_HEADER include/spdk/crc16.h 00:07:54.715 TEST_HEADER include/spdk/crc32.h 00:07:54.715 TEST_HEADER include/spdk/crc64.h 00:07:54.715 TEST_HEADER include/spdk/dif.h 00:07:54.715 TEST_HEADER include/spdk/dma.h 00:07:54.715 TEST_HEADER include/spdk/endian.h 00:07:54.715 TEST_HEADER include/spdk/env_dpdk.h 00:07:54.715 TEST_HEADER include/spdk/env.h 00:07:54.715 TEST_HEADER include/spdk/event.h 00:07:54.715 TEST_HEADER include/spdk/fd_group.h 00:07:54.715 TEST_HEADER include/spdk/fd.h 00:07:54.715 TEST_HEADER include/spdk/file.h 00:07:54.715 TEST_HEADER include/spdk/ftl.h 00:07:54.715 TEST_HEADER include/spdk/gpt_spec.h 00:07:54.715 TEST_HEADER include/spdk/hexlify.h 00:07:54.715 TEST_HEADER include/spdk/histogram_data.h 00:07:54.715 TEST_HEADER include/spdk/idxd.h 00:07:54.715 TEST_HEADER include/spdk/idxd_spec.h 00:07:54.715 TEST_HEADER include/spdk/init.h 00:07:54.715 TEST_HEADER include/spdk/ioat.h 00:07:54.715 CC examples/blob/cli/blobcli.o 00:07:54.715 TEST_HEADER include/spdk/ioat_spec.h 00:07:54.715 TEST_HEADER include/spdk/iscsi_spec.h 00:07:54.715 TEST_HEADER include/spdk/json.h 00:07:54.715 TEST_HEADER include/spdk/jsonrpc.h 00:07:54.715 TEST_HEADER include/spdk/keyring.h 00:07:54.715 TEST_HEADER include/spdk/keyring_module.h 00:07:54.715 TEST_HEADER include/spdk/likely.h 00:07:54.715 TEST_HEADER include/spdk/log.h 00:07:54.715 TEST_HEADER include/spdk/lvol.h 00:07:54.715 CC test/app/jsoncat/jsoncat.o 00:07:54.715 TEST_HEADER include/spdk/memory.h 00:07:54.715 TEST_HEADER include/spdk/mmio.h 00:07:54.715 TEST_HEADER include/spdk/nbd.h 00:07:54.715 TEST_HEADER include/spdk/notify.h 00:07:54.715 TEST_HEADER include/spdk/nvme.h 00:07:54.715 TEST_HEADER include/spdk/nvme_intel.h 00:07:54.715 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:54.715 LINK histogram_perf 00:07:54.715 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:54.715 TEST_HEADER include/spdk/nvme_spec.h 00:07:54.715 TEST_HEADER include/spdk/nvme_zns.h 00:07:54.715 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:54.715 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:54.715 TEST_HEADER include/spdk/nvmf.h 00:07:54.715 CC test/app/stub/stub.o 00:07:54.715 TEST_HEADER include/spdk/nvmf_spec.h 00:07:54.715 TEST_HEADER include/spdk/nvmf_transport.h 00:07:54.715 TEST_HEADER include/spdk/opal.h 00:07:54.715 TEST_HEADER include/spdk/opal_spec.h 00:07:54.715 TEST_HEADER include/spdk/pci_ids.h 00:07:54.715 TEST_HEADER include/spdk/pipe.h 00:07:54.715 TEST_HEADER include/spdk/queue.h 00:07:54.715 TEST_HEADER include/spdk/reduce.h 00:07:54.715 TEST_HEADER include/spdk/rpc.h 00:07:54.715 TEST_HEADER include/spdk/scheduler.h 00:07:54.715 TEST_HEADER include/spdk/scsi.h 00:07:54.715 TEST_HEADER include/spdk/scsi_spec.h 00:07:54.715 TEST_HEADER include/spdk/sock.h 00:07:54.715 LINK vhost_fuzz 00:07:54.715 TEST_HEADER include/spdk/stdinc.h 00:07:54.715 TEST_HEADER include/spdk/string.h 00:07:54.715 TEST_HEADER include/spdk/thread.h 00:07:54.716 TEST_HEADER include/spdk/trace.h 00:07:54.716 TEST_HEADER include/spdk/trace_parser.h 00:07:54.716 TEST_HEADER include/spdk/tree.h 00:07:54.716 TEST_HEADER 
include/spdk/ublk.h 00:07:54.716 TEST_HEADER include/spdk/util.h 00:07:54.716 TEST_HEADER include/spdk/uuid.h 00:07:54.973 TEST_HEADER include/spdk/version.h 00:07:54.973 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:54.973 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:54.973 TEST_HEADER include/spdk/vhost.h 00:07:54.973 TEST_HEADER include/spdk/vmd.h 00:07:54.973 TEST_HEADER include/spdk/xor.h 00:07:54.973 TEST_HEADER include/spdk/zipf.h 00:07:54.973 CXX test/cpp_headers/accel.o 00:07:54.973 CC test/env/mem_callbacks/mem_callbacks.o 00:07:54.973 CC test/event/event_perf/event_perf.o 00:07:54.973 LINK jsoncat 00:07:54.973 LINK reconnect 00:07:54.973 LINK stub 00:07:54.973 CC test/event/reactor/reactor.o 00:07:54.973 CXX test/cpp_headers/accel_module.o 00:07:54.973 LINK event_perf 00:07:54.973 LINK mem_callbacks 00:07:55.231 CC test/event/reactor_perf/reactor_perf.o 00:07:55.231 LINK reactor 00:07:55.231 CXX test/cpp_headers/assert.o 00:07:55.231 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:55.231 CC test/lvol/esnap/esnap.o 00:07:55.231 CXX test/cpp_headers/barrier.o 00:07:55.231 LINK blobcli 00:07:55.231 CC examples/nvme/arbitration/arbitration.o 00:07:55.231 CXX test/cpp_headers/base64.o 00:07:55.231 LINK reactor_perf 00:07:55.231 CC test/env/vtophys/vtophys.o 00:07:55.489 CXX test/cpp_headers/bdev.o 00:07:55.489 CXX test/cpp_headers/bdev_module.o 00:07:55.489 LINK vtophys 00:07:55.489 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:55.489 CC test/env/memory/memory_ut.o 00:07:55.489 CC test/event/app_repeat/app_repeat.o 00:07:55.747 LINK arbitration 00:07:55.747 LINK env_dpdk_post_init 00:07:55.747 LINK nvme_manage 00:07:55.747 CXX test/cpp_headers/bdev_zone.o 00:07:55.747 CC test/rpc_client/rpc_client_test.o 00:07:55.747 LINK app_repeat 00:07:55.747 CC test/nvme/aer/aer.o 00:07:56.005 CC examples/nvme/hotplug/hotplug.o 00:07:56.005 LINK iscsi_fuzz 00:07:56.005 CXX test/cpp_headers/bit_array.o 00:07:56.005 LINK rpc_client_test 00:07:56.005 CC test/event/scheduler/scheduler.o 00:07:56.005 CC test/nvme/reset/reset.o 00:07:56.005 CC examples/bdev/hello_world/hello_bdev.o 00:07:56.262 LINK hotplug 00:07:56.262 CXX test/cpp_headers/bit_pool.o 00:07:56.262 LINK aer 00:07:56.262 CC test/nvme/sgl/sgl.o 00:07:56.262 LINK scheduler 00:07:56.262 CC test/nvme/e2edp/nvme_dp.o 00:07:56.262 LINK memory_ut 00:07:56.262 CXX test/cpp_headers/blob_bdev.o 00:07:56.262 LINK hello_bdev 00:07:56.262 LINK reset 00:07:56.523 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:56.523 CC examples/nvme/abort/abort.o 00:07:56.523 LINK sgl 00:07:56.523 CXX test/cpp_headers/blobfs_bdev.o 00:07:56.523 LINK nvme_dp 00:07:56.523 CC test/nvme/overhead/overhead.o 00:07:56.523 CC test/env/pci/pci_ut.o 00:07:56.523 LINK cmb_copy 00:07:56.523 CC test/nvme/err_injection/err_injection.o 00:07:56.781 CC examples/bdev/bdevperf/bdevperf.o 00:07:56.781 CXX test/cpp_headers/blobfs.o 00:07:56.781 CC test/nvme/startup/startup.o 00:07:56.781 LINK abort 00:07:56.781 CC test/nvme/reserve/reserve.o 00:07:56.781 LINK err_injection 00:07:56.781 LINK overhead 00:07:56.781 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:57.039 CXX test/cpp_headers/blob.o 00:07:57.039 LINK startup 00:07:57.039 LINK pci_ut 00:07:57.039 LINK reserve 00:07:57.039 LINK pmr_persistence 00:07:57.040 CC test/nvme/simple_copy/simple_copy.o 00:07:57.040 CXX test/cpp_headers/conf.o 00:07:57.040 CXX test/cpp_headers/config.o 00:07:57.040 CC test/nvme/connect_stress/connect_stress.o 00:07:57.040 CXX test/cpp_headers/cpuset.o 00:07:57.298 CC 
test/accel/dif/dif.o 00:07:57.298 CC test/nvme/boot_partition/boot_partition.o 00:07:57.298 CXX test/cpp_headers/crc16.o 00:07:57.298 CC test/nvme/fused_ordering/fused_ordering.o 00:07:57.298 LINK connect_stress 00:07:57.298 CC test/nvme/compliance/nvme_compliance.o 00:07:57.298 LINK simple_copy 00:07:57.298 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:57.556 LINK bdevperf 00:07:57.556 CXX test/cpp_headers/crc32.o 00:07:57.556 LINK boot_partition 00:07:57.556 CXX test/cpp_headers/crc64.o 00:07:57.556 CXX test/cpp_headers/dif.o 00:07:57.556 LINK fused_ordering 00:07:57.556 LINK doorbell_aers 00:07:57.556 LINK dif 00:07:57.556 LINK nvme_compliance 00:07:57.556 CXX test/cpp_headers/dma.o 00:07:57.814 CXX test/cpp_headers/endian.o 00:07:57.814 CXX test/cpp_headers/env_dpdk.o 00:07:57.814 CC test/nvme/fdp/fdp.o 00:07:57.814 CC test/nvme/cuse/cuse.o 00:07:57.814 CXX test/cpp_headers/env.o 00:07:57.814 CXX test/cpp_headers/event.o 00:07:57.814 CXX test/cpp_headers/fd_group.o 00:07:57.814 CXX test/cpp_headers/fd.o 00:07:57.814 CXX test/cpp_headers/file.o 00:07:57.814 CC examples/nvmf/nvmf/nvmf.o 00:07:57.814 CXX test/cpp_headers/ftl.o 00:07:57.814 CXX test/cpp_headers/gpt_spec.o 00:07:58.072 CXX test/cpp_headers/hexlify.o 00:07:58.072 CXX test/cpp_headers/histogram_data.o 00:07:58.072 CXX test/cpp_headers/idxd.o 00:07:58.072 LINK fdp 00:07:58.072 CXX test/cpp_headers/idxd_spec.o 00:07:58.072 CXX test/cpp_headers/init.o 00:07:58.072 CXX test/cpp_headers/ioat.o 00:07:58.072 CXX test/cpp_headers/ioat_spec.o 00:07:58.072 LINK nvmf 00:07:58.072 CXX test/cpp_headers/iscsi_spec.o 00:07:58.330 CC test/bdev/bdevio/bdevio.o 00:07:58.330 CXX test/cpp_headers/json.o 00:07:58.330 CXX test/cpp_headers/jsonrpc.o 00:07:58.330 CXX test/cpp_headers/keyring.o 00:07:58.330 CXX test/cpp_headers/keyring_module.o 00:07:58.330 CXX test/cpp_headers/likely.o 00:07:58.330 CXX test/cpp_headers/log.o 00:07:58.330 CXX test/cpp_headers/lvol.o 00:07:58.330 CXX test/cpp_headers/memory.o 00:07:58.330 CXX test/cpp_headers/mmio.o 00:07:58.330 CXX test/cpp_headers/nbd.o 00:07:58.587 CXX test/cpp_headers/notify.o 00:07:58.587 CXX test/cpp_headers/nvme.o 00:07:58.587 CXX test/cpp_headers/nvme_intel.o 00:07:58.587 CXX test/cpp_headers/nvme_ocssd.o 00:07:58.587 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:58.587 CXX test/cpp_headers/nvme_spec.o 00:07:58.587 LINK bdevio 00:07:58.587 CXX test/cpp_headers/nvme_zns.o 00:07:58.587 CXX test/cpp_headers/nvmf_cmd.o 00:07:58.587 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:58.587 CXX test/cpp_headers/nvmf.o 00:07:58.845 CXX test/cpp_headers/nvmf_spec.o 00:07:58.845 CXX test/cpp_headers/nvmf_transport.o 00:07:58.845 CXX test/cpp_headers/opal.o 00:07:58.845 CXX test/cpp_headers/opal_spec.o 00:07:58.845 CXX test/cpp_headers/pci_ids.o 00:07:58.845 CXX test/cpp_headers/pipe.o 00:07:58.845 CXX test/cpp_headers/queue.o 00:07:58.845 CXX test/cpp_headers/reduce.o 00:07:58.845 CXX test/cpp_headers/rpc.o 00:07:58.845 CXX test/cpp_headers/scheduler.o 00:07:59.103 CXX test/cpp_headers/scsi.o 00:07:59.103 CXX test/cpp_headers/scsi_spec.o 00:07:59.103 CXX test/cpp_headers/sock.o 00:07:59.103 LINK cuse 00:07:59.103 CXX test/cpp_headers/stdinc.o 00:07:59.103 CXX test/cpp_headers/string.o 00:07:59.103 CXX test/cpp_headers/thread.o 00:07:59.103 CXX test/cpp_headers/trace.o 00:07:59.103 CXX test/cpp_headers/trace_parser.o 00:07:59.103 CXX test/cpp_headers/tree.o 00:07:59.103 CXX test/cpp_headers/ublk.o 00:07:59.103 CXX test/cpp_headers/util.o 00:07:59.103 CXX test/cpp_headers/uuid.o 00:07:59.103 CXX 
test/cpp_headers/version.o 00:07:59.103 CXX test/cpp_headers/vfio_user_pci.o 00:07:59.103 CXX test/cpp_headers/vfio_user_spec.o 00:07:59.103 CXX test/cpp_headers/vhost.o 00:07:59.376 CXX test/cpp_headers/vmd.o 00:07:59.376 CXX test/cpp_headers/xor.o 00:07:59.376 CXX test/cpp_headers/zipf.o 00:08:00.309 LINK esnap 00:08:00.569 00:08:00.569 real 0m54.212s 00:08:00.569 user 5m0.932s 00:08:00.569 sys 1m6.531s 00:08:00.569 12:19:29 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:08:00.569 12:19:29 make -- common/autotest_common.sh@10 -- $ set +x 00:08:00.569 ************************************ 00:08:00.569 END TEST make 00:08:00.569 ************************************ 00:08:00.569 12:19:29 -- common/autotest_common.sh@1142 -- $ return 0 00:08:00.569 12:19:29 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:00.569 12:19:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:00.569 12:19:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:00.569 12:19:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:00.569 12:19:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:08:00.569 12:19:29 -- pm/common@44 -- $ pid=5935 00:08:00.569 12:19:29 -- pm/common@50 -- $ kill -TERM 5935 00:08:00.569 12:19:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:00.569 12:19:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:08:00.569 12:19:29 -- pm/common@44 -- $ pid=5937 00:08:00.569 12:19:29 -- pm/common@50 -- $ kill -TERM 5937 00:08:00.569 12:19:29 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:00.569 12:19:29 -- nvmf/common.sh@7 -- # uname -s 00:08:00.569 12:19:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.569 12:19:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.569 12:19:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.569 12:19:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.569 12:19:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.569 12:19:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.569 12:19:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.569 12:19:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.569 12:19:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.827 12:19:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.827 12:19:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:08:00.827 12:19:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:08:00.827 12:19:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.827 12:19:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.827 12:19:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:00.827 12:19:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.827 12:19:29 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:00.827 12:19:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.827 12:19:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.827 12:19:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.827 12:19:29 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.827 12:19:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.827 12:19:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.827 12:19:29 -- paths/export.sh@5 -- # export PATH 00:08:00.827 12:19:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.827 12:19:29 -- nvmf/common.sh@47 -- # : 0 00:08:00.827 12:19:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.827 12:19:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.827 12:19:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.827 12:19:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.827 12:19:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.827 12:19:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.827 12:19:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:00.827 12:19:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.827 12:19:29 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:00.827 12:19:29 -- spdk/autotest.sh@32 -- # uname -s 00:08:00.827 12:19:29 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:00.827 12:19:29 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:00.827 12:19:29 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:00.827 12:19:29 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:08:00.827 12:19:29 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:00.827 12:19:29 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:00.827 12:19:29 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:00.827 12:19:29 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:00.827 12:19:29 -- spdk/autotest.sh@48 -- # udevadm_pid=64973 00:08:00.827 12:19:29 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:00.827 12:19:29 -- pm/common@17 -- # local monitor 00:08:00.827 12:19:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:00.827 12:19:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:00.827 12:19:29 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:00.827 12:19:29 -- pm/common@25 -- # sleep 1 00:08:00.827 12:19:29 -- pm/common@21 -- # date +%s 00:08:00.827 12:19:29 -- pm/common@21 -- # date +%s 00:08:00.827 12:19:29 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720786769 00:08:00.827 
12:19:29 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720786769 00:08:00.827 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720786769_collect-vmstat.pm.log 00:08:00.827 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720786769_collect-cpu-load.pm.log 00:08:01.763 12:19:30 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:01.763 12:19:30 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:01.763 12:19:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:01.763 12:19:30 -- common/autotest_common.sh@10 -- # set +x 00:08:01.763 12:19:30 -- spdk/autotest.sh@59 -- # create_test_list 00:08:01.763 12:19:30 -- common/autotest_common.sh@746 -- # xtrace_disable 00:08:01.763 12:19:30 -- common/autotest_common.sh@10 -- # set +x 00:08:01.763 12:19:30 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:08:01.763 12:19:30 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:08:01.763 12:19:30 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:08:01.763 12:19:30 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:08:01.763 12:19:30 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:08:01.763 12:19:30 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:01.763 12:19:30 -- common/autotest_common.sh@1455 -- # uname 00:08:01.763 12:19:30 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:08:01.763 12:19:30 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:01.763 12:19:30 -- common/autotest_common.sh@1475 -- # uname 00:08:01.763 12:19:30 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:08:01.763 12:19:30 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:08:01.763 12:19:30 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:08:01.763 12:19:30 -- spdk/autotest.sh@72 -- # hash lcov 00:08:01.764 12:19:30 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:08:01.764 12:19:30 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:08:01.764 --rc lcov_branch_coverage=1 00:08:01.764 --rc lcov_function_coverage=1 00:08:01.764 --rc genhtml_branch_coverage=1 00:08:01.764 --rc genhtml_function_coverage=1 00:08:01.764 --rc genhtml_legend=1 00:08:01.764 --rc geninfo_all_blocks=1 00:08:01.764 ' 00:08:01.764 12:19:30 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:08:01.764 --rc lcov_branch_coverage=1 00:08:01.764 --rc lcov_function_coverage=1 00:08:01.764 --rc genhtml_branch_coverage=1 00:08:01.764 --rc genhtml_function_coverage=1 00:08:01.764 --rc genhtml_legend=1 00:08:01.764 --rc geninfo_all_blocks=1 00:08:01.764 ' 00:08:01.764 12:19:30 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:08:01.764 --rc lcov_branch_coverage=1 00:08:01.764 --rc lcov_function_coverage=1 00:08:01.764 --rc genhtml_branch_coverage=1 00:08:01.764 --rc genhtml_function_coverage=1 00:08:01.764 --rc genhtml_legend=1 00:08:01.764 --rc geninfo_all_blocks=1 00:08:01.764 --no-external' 00:08:01.764 12:19:30 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:08:01.764 --rc lcov_branch_coverage=1 00:08:01.764 --rc lcov_function_coverage=1 00:08:01.764 --rc genhtml_branch_coverage=1 00:08:01.764 --rc genhtml_function_coverage=1 00:08:01.764 --rc genhtml_legend=1 00:08:01.764 --rc geninfo_all_blocks=1 00:08:01.764 --no-external' 00:08:01.764 12:19:30 -- spdk/autotest.sh@83 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:08:02.022 lcov: LCOV version 1.14 00:08:02.022 12:19:30 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:08:16.905 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:16.905 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:29.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:08:29.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:08:29.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:08:29.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:08:29.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:08:29.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:08:29.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:08:29.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:08:29.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:08:29.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:08:29.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:08:29.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:08:29.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:08:29.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:08:29.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:08:29.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:08:29.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:08:29.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:08:29.164 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 
00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:08:29.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:08:29.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:08:29.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:08:29.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:08:29.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:08:29.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:08:29.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:08:29.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:08:29.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:08:29.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:08:29.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:08:29.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:08:29.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:08:29.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:08:29.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:08:29.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:08:29.165 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:08:29.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:08:29.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:08:29.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:08:29.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:08:29.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:08:29.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:08:29.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:08:29.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:08:29.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:08:29.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:08:29.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:08:29.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:08:29.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:08:29.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:08:29.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:08:29.165 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:08:29.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:08:32.446 12:20:01 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:08:32.446 12:20:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:32.446 12:20:01 -- common/autotest_common.sh@10 -- # set +x 00:08:32.446 12:20:01 -- spdk/autotest.sh@91 -- # rm -f 00:08:32.446 12:20:01 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:33.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:33.271 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:08:33.271 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:33.271 12:20:02 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:08:33.271 12:20:02 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:08:33.271 12:20:02 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:08:33.271 12:20:02 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:08:33.271 12:20:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:33.271 12:20:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:08:33.271 12:20:02 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:08:33.271 12:20:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:33.271 12:20:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:33.271 12:20:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:33.271 12:20:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:08:33.271 12:20:02 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:08:33.271 
12:20:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:33.271 12:20:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:33.271 12:20:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:33.271 12:20:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:08:33.271 12:20:02 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:08:33.271 12:20:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:33.271 12:20:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:33.271 12:20:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:33.271 12:20:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:08:33.271 12:20:02 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:08:33.271 12:20:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:33.271 12:20:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:33.271 12:20:02 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:08:33.271 12:20:02 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:08:33.271 12:20:02 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:08:33.271 12:20:02 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:08:33.271 12:20:02 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:08:33.271 12:20:02 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:33.271 No valid GPT data, bailing 00:08:33.271 12:20:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:33.271 12:20:02 -- scripts/common.sh@391 -- # pt= 00:08:33.271 12:20:02 -- scripts/common.sh@392 -- # return 1 00:08:33.271 12:20:02 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:33.271 1+0 records in 00:08:33.271 1+0 records out 00:08:33.271 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00456073 s, 230 MB/s 00:08:33.271 12:20:02 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:08:33.271 12:20:02 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:08:33.271 12:20:02 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:08:33.271 12:20:02 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:08:33.271 12:20:02 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:33.271 No valid GPT data, bailing 00:08:33.271 12:20:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:33.271 12:20:02 -- scripts/common.sh@391 -- # pt= 00:08:33.271 12:20:02 -- scripts/common.sh@392 -- # return 1 00:08:33.271 12:20:02 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:33.271 1+0 records in 00:08:33.271 1+0 records out 00:08:33.271 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0034911 s, 300 MB/s 00:08:33.271 12:20:02 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:08:33.271 12:20:02 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:08:33.271 12:20:02 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:08:33.271 12:20:02 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:08:33.271 12:20:02 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:08:33.530 No valid GPT data, bailing 00:08:33.530 12:20:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:08:33.530 12:20:02 -- scripts/common.sh@391 -- # pt= 00:08:33.530 12:20:02 -- scripts/common.sh@392 -- # return 1 
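[Editor's aside] The pre-cleanup trace around this point repeats one pattern per whole NVMe namespace: probe the device for an existing partition table, and only when none is found ("No valid GPT data, bailing", empty PTTYPE) overwrite its first 1 MiB with zeros so stale metadata cannot leak into the tests. The lines below are a minimal standalone sketch of that pattern, assuming only standard blkid and dd; the block_in_use helper here is a simplified, hypothetical stand-in for the scripts/common.sh logic and is not the actual SPDK implementation.

    #!/usr/bin/env bash
    set -euo pipefail
    shopt -s nullglob extglob

    block_in_use() {
        # Hypothetical helper: report the device as "in use" if blkid finds
        # any partition-table type on it (the real script also consults
        # spdk-gpt.py before falling back to blkid).
        local block=$1 pt
        pt=$(blkid -s PTTYPE -o value "$block" || true)
        [[ -n "$pt" ]]
    }

    for dev in /dev/nvme*n!(*p*); do          # whole namespaces only, skip partitions
        if block_in_use "$dev"; then
            echo "skipping $dev: partition table present"
            continue
        fi
        dd if=/dev/zero of="$dev" bs=1M count=1   # zero the first MiB, as in the trace
    done

[End of aside]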
00:08:33.530 12:20:02 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:08:33.530 1+0 records in 00:08:33.530 1+0 records out 00:08:33.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00377348 s, 278 MB/s 00:08:33.530 12:20:02 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:08:33.530 12:20:02 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:08:33.530 12:20:02 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:08:33.530 12:20:02 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:08:33.530 12:20:02 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:08:33.530 No valid GPT data, bailing 00:08:33.530 12:20:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:08:33.530 12:20:02 -- scripts/common.sh@391 -- # pt= 00:08:33.530 12:20:02 -- scripts/common.sh@392 -- # return 1 00:08:33.530 12:20:02 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:08:33.530 1+0 records in 00:08:33.530 1+0 records out 00:08:33.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00456641 s, 230 MB/s 00:08:33.530 12:20:02 -- spdk/autotest.sh@118 -- # sync 00:08:33.530 12:20:02 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:33.530 12:20:02 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:33.530 12:20:02 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:35.472 12:20:04 -- spdk/autotest.sh@124 -- # uname -s 00:08:35.472 12:20:04 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:08:35.472 12:20:04 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:08:35.472 12:20:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:35.472 12:20:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.472 12:20:04 -- common/autotest_common.sh@10 -- # set +x 00:08:35.472 ************************************ 00:08:35.472 START TEST setup.sh 00:08:35.472 ************************************ 00:08:35.472 12:20:04 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:08:35.472 * Looking for test storage... 00:08:35.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:35.472 12:20:04 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:08:35.472 12:20:04 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:08:35.472 12:20:04 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:08:35.472 12:20:04 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:35.472 12:20:04 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.472 12:20:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:08:35.733 ************************************ 00:08:35.733 START TEST acl 00:08:35.733 ************************************ 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:08:35.733 * Looking for test storage... 
00:08:35.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:35.733 12:20:04 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:35.733 12:20:04 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:35.733 12:20:04 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:08:35.733 12:20:04 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:08:35.733 12:20:04 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:08:35.733 12:20:04 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:08:35.733 12:20:04 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:08:35.733 12:20:04 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:35.733 12:20:04 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:36.670 12:20:05 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:08:36.670 12:20:05 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:08:36.670 12:20:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:36.670 12:20:05 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:08:36.670 12:20:05 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:08:36.670 12:20:05 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:37.262 12:20:06 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:37.262 Hugepages 00:08:37.262 node hugesize free / total 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:37.262 00:08:37.262 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:08:37.262 12:20:06 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:08:37.262 12:20:06 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:37.262 12:20:06 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.262 12:20:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:08:37.262 ************************************ 00:08:37.262 START TEST denied 00:08:37.262 ************************************ 00:08:37.262 12:20:06 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:08:37.262 12:20:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:08:37.262 12:20:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:08:37.263 12:20:06 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:08:37.263 12:20:06 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:08:37.263 12:20:06 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:38.199 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:08:38.199 12:20:07 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:08:38.199 12:20:07 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:08:38.199 12:20:07 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:08:38.199 12:20:07 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:08:38.199 12:20:07 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:08:38.199 12:20:07 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:08:38.199 12:20:07 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:08:38.199 12:20:07 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:08:38.199 12:20:07 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:38.199 12:20:07 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:38.766 00:08:38.766 real 0m1.409s 00:08:38.766 user 0m0.581s 00:08:38.766 sys 0m0.774s 00:08:38.766 12:20:07 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.766 12:20:07 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:08:38.766 ************************************ 00:08:38.766 END TEST denied 00:08:38.766 ************************************ 00:08:38.766 12:20:07 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:08:38.766 12:20:07 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:08:38.766 12:20:07 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:38.766 12:20:07 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.766 12:20:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:08:38.766 ************************************ 00:08:38.766 START TEST allowed 00:08:38.766 ************************************ 00:08:38.766 12:20:07 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:08:38.766 12:20:07 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:08:38.766 12:20:07 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:08:38.766 12:20:07 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:08:38.766 12:20:07 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:08:38.766 12:20:07 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:39.701 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:39.701 12:20:08 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:08:39.701 12:20:08 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:08:39.702 12:20:08 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:08:39.702 12:20:08 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:08:39.702 12:20:08 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:08:39.702 12:20:08 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:08:39.702 12:20:08 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:08:39.702 12:20:08 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:08:39.702 12:20:08 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:39.702 12:20:08 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:40.268 00:08:40.268 real 0m1.495s 00:08:40.268 user 0m0.662s 00:08:40.268 sys 0m0.819s 00:08:40.268 12:20:09 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:08:40.268 ************************************ 00:08:40.268 END TEST allowed 00:08:40.268 ************************************ 00:08:40.268 12:20:09 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:08:40.268 12:20:09 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:08:40.268 00:08:40.268 real 0m4.754s 00:08:40.268 user 0m2.103s 00:08:40.268 sys 0m2.585s 00:08:40.268 12:20:09 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.268 12:20:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:08:40.268 ************************************ 00:08:40.268 END TEST acl 00:08:40.268 ************************************ 00:08:40.268 12:20:09 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:08:40.268 12:20:09 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:08:40.268 12:20:09 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:40.268 12:20:09 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.268 12:20:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:08:40.529 ************************************ 00:08:40.529 START TEST hugepages 00:08:40.529 ************************************ 00:08:40.529 12:20:09 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:08:40.529 * Looking for test storage... 00:08:40.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 4862620 kB' 'MemAvailable: 7379336 kB' 'Buffers: 2436 kB' 'Cached: 2721928 kB' 'SwapCached: 0 kB' 'Active: 435540 kB' 'Inactive: 2392972 kB' 'Active(anon): 114640 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392972 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 105520 kB' 'Mapped: 48756 kB' 'Shmem: 10492 kB' 'KReclaimable: 80012 kB' 'Slab: 157372 kB' 'SReclaimable: 80012 kB' 'SUnreclaim: 77360 kB' 'KernelStack: 6636 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 334872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.529 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.530 12:20:09 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.530 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:08:40.531 12:20:09 
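The long run of IFS=': ' / read -r var val _ / continue entries above is the get_meminfo helper in setup/common.sh scanning /proc/meminfo field by field until it reaches the requested key (Hugepagesize here), at which point it echoes the value (2048) and returns 0, giving hugepages.sh its default_hugepages. A minimal sketch of that loop, reconstructed from the xtrace rather than copied from the repository (the per-node lookup path the function also supports is omitted):

```bash
# Sketch of get_meminfo as it appears in the trace (setup/common.sh@16-33); a reconstruction, not verbatim.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every field until the requested key matches
        echo "$val"                        # e.g. 2048 for Hugepagesize
        return 0
    done < /proc/meminfo
}

default_hugepages=$(get_meminfo Hugepagesize)   # 2048 kB in this run
```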
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:08:40.531 12:20:09 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:08:40.531 12:20:09 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:40.531 12:20:09 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.531 12:20:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:40.531 ************************************ 00:08:40.531 START TEST default_setup 00:08:40.531 ************************************ 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:08:40.531 12:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:41.098 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:41.362 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:41.362 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6870232 kB' 'MemAvailable: 9386840 kB' 'Buffers: 2436 kB' 'Cached: 2721924 kB' 'SwapCached: 0 kB' 'Active: 452264 kB' 'Inactive: 2392984 kB' 'Active(anon): 131364 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122492 kB' 'Mapped: 48892 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157056 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77280 kB' 'KernelStack: 6512 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
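get_test_nr_hugepages above asked for 2097152 kB on node 0; with the 2048 kB default page size just read from /proc/meminfo, that comes out to nr_hugepages=1024 for the single node, and the meminfo snapshot printed by verify_nr_hugepages agrees (HugePages_Total: 1024, HugePages_Free: 1024, Hugetlb: 2097152 kB). A rough sketch of that arithmetic, assuming the count is simply the requested size divided by the default hugepage size:

```bash
# Assumed derivation of the per-test hugepage count; variable names mirror the hugepages.sh locals.
size=2097152            # requested size in kB (2 GiB)
default_hugepages=2048  # Hugepagesize from /proc/meminfo, in kB
nr_hugepages=$(( size / default_hugepages ))
echo "$nr_hugepages"    # 1024, matching HugePages_Total/HugePages_Free in the snapshot above
```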
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.362 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
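The test at hugepages.sh@96 compares the transparent hugepage policy ("always [madvise] never" in this run, i.e. madvise is selected rather than [never]) against *[never]*; since THP is not disabled, verify_nr_hugepages also samples AnonHugePages, which the snapshot shows as 0 kB, hence anon=0, and it then starts the same field-by-field scan for HugePages_Surp. A sketch of that guard, assuming the policy string comes from the usual sysfs knob and using awk in place of the script's get_meminfo:

```bash
# Assumed source of the policy string; the trace only shows the already-expanded value.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
if [[ $thp != *"[never]"* ]]; then
    # THP not disabled: anonymous huge pages may exist, so account for them.
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # 0 (kB) in this run
else
    anon=0   # assumed fallback when THP is disabled; not shown in this excerpt
fi
```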
# mem=("${mem[@]#Node +([0-9]) }") 00:08:41.363 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6870232 kB' 'MemAvailable: 9386840 kB' 'Buffers: 2436 kB' 'Cached: 2721924 kB' 'SwapCached: 0 kB' 'Active: 452432 kB' 'Inactive: 2392984 kB' 'Active(anon): 131532 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122664 kB' 'Mapped: 48892 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157056 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77280 kB' 'KernelStack: 6528 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.364 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6870616 kB' 'MemAvailable: 9387232 kB' 'Buffers: 2436 kB' 'Cached: 2721924 kB' 'SwapCached: 0 kB' 'Active: 452256 kB' 'Inactive: 2392992 kB' 'Active(anon): 131356 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122480 kB' 'Mapped: 48892 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157044 kB' 'SReclaimable: 79776 kB' 
'SUnreclaim: 77268 kB' 'KernelStack: 6480 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.365 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.366 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:08:41.367 nr_hugepages=1024 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:41.367 resv_hugepages=0 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:41.367 surplus_hugepages=0 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:41.367 anon_hugepages=0 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6871532 kB' 'MemAvailable: 9388148 kB' 'Buffers: 2436 kB' 'Cached: 2721924 kB' 'SwapCached: 0 kB' 'Active: 452464 kB' 'Inactive: 2392992 kB' 'Active(anon): 131564 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122668 kB' 'Mapped: 48764 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157040 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77264 kB' 'KernelStack: 6512 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 
6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.367 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.368 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@32 -- # no_nodes=1 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6871532 kB' 'MemUsed: 5370436 kB' 'SwapCached: 0 kB' 'Active: 452320 kB' 'Inactive: 2392992 kB' 'Active(anon): 131420 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 2724360 kB' 'Mapped: 48764 kB' 'AnonPages: 122572 kB' 'Shmem: 10468 kB' 'KernelStack: 6528 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79776 kB' 'Slab: 157040 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77264 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.369 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 
12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
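The trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time until it reaches the requested HugePages_Surp entry and echoing its value (0 in this run). A minimal stand-alone sketch of that lookup, written here for illustration rather than copied from SPDK's setup/common.sh:

# illustrative helper, not SPDK's get_meminfo; field names are the ones seen in the trace
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # same comparison the trace shows as "[[ <field> == HugePages_Surp ]]"
        if [[ $var == "$get" ]]; then
            echo "$val"        # e.g. 0 for HugePages_Surp in the run above
            return 0
        fi
    done < /proc/meminfo
    return 1                   # field not present
}
# get_meminfo_field HugePages_Total  -> 1024 here, matching the "node0=1024 expecting 1024" the test prints next
# get_meminfo_field Hugepagesize     -> 2048 (kB)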
00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:41.629 node0=1024 expecting 1024 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:41.629 00:08:41.629 real 0m0.968s 00:08:41.629 user 0m0.437s 00:08:41.629 sys 0m0.467s 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.629 12:20:10 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:08:41.629 ************************************ 00:08:41.629 END TEST default_setup 00:08:41.629 ************************************ 00:08:41.629 12:20:10 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:08:41.629 12:20:10 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:08:41.629 12:20:10 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:41.629 12:20:10 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.629 12:20:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:41.629 ************************************ 00:08:41.629 START TEST per_node_1G_alloc 00:08:41.629 ************************************ 00:08:41.629 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:08:41.629 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:08:41.629 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:08:41.629 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:08:41.629 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:41.629 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:08:41.629 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:41.629 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:08:41.629 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:41.629 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:08:41.630 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:41.630 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:41.630 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:08:41.630 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:08:41.630 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:41.630 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:41.630 12:20:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:41.630 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:41.630 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:41.630 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:08:41.630 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:08:41.630 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:08:41.630 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:08:41.630 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:08:41.630 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:41.630 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:41.891 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:41.891 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:41.891 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7928380 kB' 'MemAvailable: 10444996 kB' 'Buffers: 2436 kB' 'Cached: 2721924 kB' 'SwapCached: 0 kB' 'Active: 453000 kB' 'Inactive: 2392992 kB' 'Active(anon): 132100 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123220 kB' 'Mapped: 48952 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157032 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77256 kB' 'KernelStack: 6532 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.891 12:20:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.891 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.892 12:20:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7928380 kB' 'MemAvailable: 10444996 kB' 'Buffers: 2436 kB' 'Cached: 2721924 kB' 'SwapCached: 0 kB' 'Active: 452340 kB' 'Inactive: 2392992 kB' 'Active(anon): 131440 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122512 kB' 'Mapped: 48768 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157020 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77244 kB' 'KernelStack: 6512 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.892 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.893 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7928380 kB' 'MemAvailable: 10444996 kB' 'Buffers: 2436 kB' 'Cached: 2721924 kB' 'SwapCached: 0 kB' 'Active: 451992 kB' 'Inactive: 2392992 kB' 'Active(anon): 131092 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122212 kB' 'Mapped: 48768 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157016 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77240 kB' 'KernelStack: 6496 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.894 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.895 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.896 
12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.896 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.157 12:20:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:08:42.157 nr_hugepages=512 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:08:42.157 resv_hugepages=0 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:42.157 surplus_hugepages=0 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:42.157 anon_hugepages=0 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7928380 kB' 'MemAvailable: 10444996 kB' 'Buffers: 2436 kB' 'Cached: 2721924 kB' 'SwapCached: 0 kB' 'Active: 452204 kB' 'Inactive: 2392992 kB' 'Active(anon): 131304 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 
kB' 'Writeback: 0 kB' 'AnonPages: 122424 kB' 'Mapped: 48768 kB' 'Shmem: 10468 kB' 'KReclaimable: 79776 kB' 'Slab: 157016 kB' 'SReclaimable: 79776 kB' 'SUnreclaim: 77240 kB' 'KernelStack: 6480 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.157 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 
12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.158 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:11 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:08:42.158 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.158 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.158 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.158 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:08:42.158 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:08:42.158 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:42.158 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7928140 kB' 'MemUsed: 4313828 kB' 'SwapCached: 0 kB' 'Active: 452340 kB' 'Inactive: 2392992 kB' 'Active(anon): 131440 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 2724360 kB' 'Mapped: 48768 kB' 'AnonPages: 122552 kB' 'Shmem: 10468 kB' 'KernelStack: 6528 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 79792 kB' 'Slab: 157032 kB' 'SReclaimable: 79792 kB' 'SUnreclaim: 77240 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.159 12:20:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.159 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:42.160 node0=512 expecting 512 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:08:42.160 00:08:42.160 real 0m0.527s 00:08:42.160 user 0m0.252s 00:08:42.160 sys 0m0.309s 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.160 12:20:11 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:08:42.160 ************************************ 00:08:42.160 END TEST per_node_1G_alloc 00:08:42.160 ************************************ 00:08:42.160 12:20:11 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:08:42.160 12:20:11 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:08:42.160 12:20:11 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:42.160 12:20:11 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.160 12:20:11 
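[editor note] The bulk of the trace above is one helper in setup/common.sh stepping through /proc/meminfo a field at a time (IFS=': ', read -r var val _, compare the key, continue) until it reaches the requested entry. The following is a minimal sketch of that lookup reconstructed from the xtrace, not the verbatim SPDK source; the optional node argument and the "Node <n>" prefix stripping are both visible in the @17-@29 lines of the trace.

# Sketch of the get_meminfo-style lookup traced above (setup/common.sh).
# Reconstructed from the xtrace; treat it as an approximation, not the real script.
shopt -s extglob    # needed for the +([0-9]) pattern below

get_meminfo() {
	local get=$1 node=${2:-}    # e.g. get=HugePages_Surp, node optional
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# With a node given, prefer the per-node meminfo; its lines carry a
	# "Node <n> " prefix that is stripped before parsing.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")

	# The loop that fills the trace: split each "Key: value kB" line on ': ',
	# skip until the key matches, then print just the value.
	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

In this run the lookups land on fields such as HugePages_Surp and HugePages_Rsvd, which are 0 in the dumps, hence the repeated "echo 0" / "return 0" steps above.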
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:42.160 ************************************ 00:08:42.160 START TEST even_2G_alloc 00:08:42.160 ************************************ 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:42.160 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:42.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:42.419 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:42.419 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:42.419 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:08:42.419 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:08:42.419 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:08:42.419 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:08:42.419 12:20:11 setup.sh.hugepages.even_2G_alloc 
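[editor note] The even_2G_alloc preamble traced above (hugepages.sh @49-@84 and @152-@153) converts the requested size into a page count and places it on the detected NUMA node before handing off to scripts/setup.sh. A hedged reconstruction of that flow, with the 2048 kB default page size taken from the "Hugepagesize: 2048 kB" line in the meminfo dumps:

# Sketch of get_test_nr_hugepages / get_test_nr_hugepages_per_node as traced
# in even_2G_alloc above; an approximation, not the verbatim hugepages.sh.
default_hugepages=2048              # kB, matches "Hugepagesize: 2048 kB"

get_test_nr_hugepages() {
	local size=$1; shift            # requested size in kB (2097152 = 2 GiB here)
	local user_nodes=("$@")         # empty in this run; multi-node split omitted

	(( size >= default_hugepages )) || return 1
	nr_hugepages=$(( size / default_hugepages ))    # 2097152 / 2048 = 1024 pages

	# No explicit nodes were passed and one node was detected, so the whole
	# allocation lands on node 0, matching the @81/@82 lines of the trace.
	local _no_nodes=1
	declare -g -a nodes_test=()
	nodes_test[_no_nodes - 1]=$nr_hugepages
}

# Usage mirroring the trace: request 2 GiB, then let setup.sh allocate evenly.
# (The setup.sh path is the one used in this run.)
get_test_nr_hugepages 2097152
NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh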
-- setup/hugepages.sh@92 -- # local surp 00:08:42.419 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:08:42.419 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:08:42.419 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:42.419 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:42.419 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:42.419 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:08:42.419 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:42.419 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:42.419 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:42.419 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:42.419 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:42.419 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:42.419 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:42.419 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6876292 kB' 'MemAvailable: 9392916 kB' 'Buffers: 2436 kB' 'Cached: 2721924 kB' 'SwapCached: 0 kB' 'Active: 452728 kB' 'Inactive: 2392992 kB' 'Active(anon): 131828 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122976 kB' 'Mapped: 48868 kB' 'Shmem: 10468 kB' 'KReclaimable: 79792 kB' 'Slab: 157032 kB' 'SReclaimable: 79792 kB' 'SUnreclaim: 77240 kB' 'KernelStack: 6500 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.420 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.683 12:20:11 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.683 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6876040 kB' 'MemAvailable: 9392664 kB' 'Buffers: 2436 kB' 'Cached: 2721924 kB' 'SwapCached: 0 kB' 'Active: 452508 kB' 'Inactive: 2392992 kB' 'Active(anon): 131608 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 
2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122720 kB' 'Mapped: 48768 kB' 'Shmem: 10468 kB' 'KReclaimable: 79792 kB' 'Slab: 157036 kB' 'SReclaimable: 79792 kB' 'SUnreclaim: 77244 kB' 'KernelStack: 6512 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.684 12:20:11 
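[editor note] Each of the long single-quoted dumps in the trace (the printf '%s\n' 'MemTotal: ...' blocks) is the /proc/meminfo snapshot the parser walks. The hugepage figures in those dumps are internally consistent with the 2 GiB request; worked arithmetic, not part of the script:

# HugePages_Total: 1024, Hugepagesize: 2048 kB, Hugetlb: 2097152 kB
echo $(( 1024 * 2048 ))            # 2097152 kB, equal to the dumped Hugetlb value
echo $(( 2097152 / 1024 / 1024 ))  # 2, i.e. the 2 GiB requested by even_2G_alloc

HugePages_Free equals HugePages_Total in these snapshots, i.e. the freshly allocated pool is not yet in use at this point in the test.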
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.684 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.685 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6876040 kB' 'MemAvailable: 9392664 kB' 'Buffers: 2436 kB' 'Cached: 2721924 kB' 'SwapCached: 0 kB' 'Active: 452336 kB' 'Inactive: 2392992 kB' 'Active(anon): 131436 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122584 kB' 'Mapped: 48768 kB' 'Shmem: 10468 kB' 'KReclaimable: 79792 kB' 'Slab: 157036 kB' 'SReclaimable: 79792 kB' 'SUnreclaim: 77244 kB' 'KernelStack: 6528 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.686 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:08:42.687 nr_hugepages=1024 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:42.687 resv_hugepages=0 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:42.687 surplus_hugepages=0 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:42.687 anon_hugepages=0 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:42.687 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6876040 kB' 'MemAvailable: 9392664 kB' 'Buffers: 2436 kB' 'Cached: 2721924 kB' 'SwapCached: 0 kB' 'Active: 452544 kB' 'Inactive: 2392992 kB' 'Active(anon): 131644 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122852 kB' 'Mapped: 49288 kB' 'Shmem: 10468 kB' 'KReclaimable: 79792 kB' 'Slab: 157036 kB' 'SReclaimable: 79792 kB' 'SUnreclaim: 77244 kB' 'KernelStack: 6544 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 354464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:42.688 12:20:11 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.688 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:42.689 12:20:11 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:42.689 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6876488 kB' 'MemUsed: 5365480 kB' 'SwapCached: 0 kB' 'Active: 452328 kB' 'Inactive: 2392992 kB' 'Active(anon): 131428 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 2724360 kB' 'Mapped: 48768 kB' 'AnonPages: 122672 kB' 'Shmem: 10468 kB' 'KernelStack: 6528 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79792 kB' 'Slab: 157040 kB' 'SReclaimable: 79792 kB' 'SUnreclaim: 77248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.690 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:42.691 node0=1024 expecting 1024 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:42.691 00:08:42.691 real 0m0.582s 00:08:42.691 user 0m0.293s 00:08:42.691 sys 0m0.275s 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.691 ************************************ 00:08:42.691 END TEST even_2G_alloc 00:08:42.691 ************************************ 00:08:42.691 12:20:11 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:08:42.691 12:20:11 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:08:42.691 12:20:11 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:08:42.691 12:20:11 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:42.691 12:20:11 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.691 12:20:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:42.691 ************************************ 00:08:42.691 START TEST odd_alloc 00:08:42.691 ************************************ 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
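[Editor's note] The odd_alloc setup traced above deliberately requests an odd page count: HUGEMEM=2049 (MB) is passed down as size=2098176 kB, which with the 2048 kB hugepage size reported on this VM yields nr_hugepages=1025. A minimal sketch of that arithmetic, inferred from the trace (the round-up step is an assumption; the exact formula in setup/hugepages.sh is not shown in this excerpt):

    # sketch only -- variable names follow the trace, rounding is assumed
    HUGEMEM=2049                                  # MB requested by the odd_alloc test
    size_kb=$(( HUGEMEM * 1024 ))                 # 2098176 kB, the value passed to get_test_nr_hugepages
    default_hugepages=2048                        # kB, Hugepagesize from /proc/meminfo
    nr_hugepages=$(( (size_kb + default_hugepages - 1) / default_hugepages ))
    echo "$nr_hugepages"                          # 1025 on this machine

With a single NUMA node (as in this run), all 1025 pages land in nodes_test[0], and HUGE_EVEN_ALLOC=yes only matters on multi-node hosts.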
00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:42.691 12:20:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:43.263 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:43.263 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:43.263 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:43.263 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:08:43.263 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:08:43.263 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:08:43.263 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:08:43.263 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:08:43.263 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:08:43.263 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:08:43.263 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:43.263 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:43.263 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:43.263 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:08:43.263 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:08:43.263 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6890228 kB' 'MemAvailable: 9406852 kB' 'Buffers: 2436 kB' 'Cached: 2721924 kB' 'SwapCached: 0 kB' 'Active: 452856 kB' 'Inactive: 2392992 kB' 'Active(anon): 131956 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123116 kB' 'Mapped: 48948 kB' 'Shmem: 10468 kB' 'KReclaimable: 79792 kB' 'Slab: 157056 kB' 'SReclaimable: 79792 kB' 'SUnreclaim: 77264 kB' 'KernelStack: 6532 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 
12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.264 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 
12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
12241968 kB' 'MemFree: 6890228 kB' 'MemAvailable: 9406852 kB' 'Buffers: 2436 kB' 'Cached: 2721924 kB' 'SwapCached: 0 kB' 'Active: 452364 kB' 'Inactive: 2392992 kB' 'Active(anon): 131464 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122540 kB' 'Mapped: 48768 kB' 'Shmem: 10468 kB' 'KReclaimable: 79792 kB' 'Slab: 157016 kB' 'SReclaimable: 79792 kB' 'SUnreclaim: 77224 kB' 'KernelStack: 6512 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
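[Editor's note] The long runs of "continue" entries above and below all come from one field-matching loop: get_meminfo snapshots /proc/meminfo (or a per-node meminfo) with mapfile/printf, then walks it with IFS=': ' and read, skipping every key that is not the one requested and echoing the matching value. A minimal sketch reconstructed from the trace -- not the verbatim setup/common.sh helper:

    get_meminfo() {                          # sketch inferred from the xtrace output
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # per-node queries read that node's meminfo when it exists (node= is empty in this run)
        [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # strips the "Node N " prefix; needs shopt -s extglob, as in the traced script
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue # each non-matching key is one "continue" line in the trace
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

Used the way hugepages.sh does above, e.g. surp=$(get_meminfo HugePages_Surp), which is why the trace ends each pass with "echo 0" and "return 0" before surp=0 is recorded.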
00:08:43.265 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 
12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.266 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6890228 kB' 'MemAvailable: 9406852 kB' 'Buffers: 2436 kB' 'Cached: 2721924 kB' 'SwapCached: 0 kB' 'Active: 452172 kB' 'Inactive: 2392992 kB' 'Active(anon): 131272 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122344 kB' 'Mapped: 48768 kB' 'Shmem: 10468 kB' 'KReclaimable: 79792 kB' 'Slab: 157016 kB' 'SReclaimable: 79792 kB' 'SUnreclaim: 77224 kB' 'KernelStack: 6512 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.267 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:08:43.268 nr_hugepages=1025 00:08:43.268 resv_hugepages=0 00:08:43.268 surplus_hugepages=0 00:08:43.268 anon_hugepages=0 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:08:43.268 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
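The scan traced above is the script's generic meminfo reader: it walks every "Key: value" line of the chosen meminfo file, skipping (continue) each key until the requested one (here HugePages_Rsvd) matches, then echoes the value and returns. A minimal, self-contained sketch of that pattern follows; the helper name is hypothetical, and the real implementation lives in setup/common.sh and inlines the per-key loop exactly as the xtrace shows.

#!/usr/bin/env bash
shopt -s extglob   # the traced script relies on the same +([0-9]) pattern

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Use the per-node file only when a node id was given and the file exists
    # (the trace above probes node/node/meminfo and falls back to /proc/meminfo).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip per-node prefix; no-op for /proc/meminfo
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                 # e.g. 0 for HugePages_Rsvd in the trace above
            return 0
        fi
    done
    return 1
}

get_meminfo_sketch HugePages_Rsvd      # system-wide value
get_meminfo_sketch HugePages_Surp 0    # node0 value, if node0/meminfo exists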
mem_f=/proc/meminfo 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6890228 kB' 'MemAvailable: 9406852 kB' 'Buffers: 2436 kB' 'Cached: 2721924 kB' 'SwapCached: 0 kB' 'Active: 452316 kB' 'Inactive: 2392992 kB' 'Active(anon): 131416 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122488 kB' 'Mapped: 48768 kB' 'Shmem: 10468 kB' 'KReclaimable: 79792 kB' 'Slab: 157004 kB' 'SReclaimable: 79792 kB' 'SUnreclaim: 77212 kB' 'KernelStack: 6512 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.269 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6890228 kB' 'MemUsed: 5351740 kB' 'SwapCached: 0 kB' 'Active: 452448 kB' 'Inactive: 2392992 kB' 'Active(anon): 131548 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 2724360 kB' 'Mapped: 48768 kB' 'AnonPages: 122632 kB' 'Shmem: 10468 kB' 'KernelStack: 6528 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79792 kB' 'Slab: 157004 kB' 'SReclaimable: 79792 kB' 'SUnreclaim: 77212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 
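Between the two meminfo reads, the trace shows get_nodes (hugepages.sh@27-33) enumerating NUMA nodes and recording a per-node hugepage count: a single node0 with 1025 pages on this host, hence no_nodes=1. A small sketch of that enumeration follows; the sysfs counter used for the per-node value is an assumption, since the trace only shows the resulting number, not where it was read from.

#!/usr/bin/env bash
shopt -s extglob nullglob

# Enumerate /sys/devices/system/node/node<N> and record each node's 2 MiB
# hugepage count; the array index is the node id extracted from the path.
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
echo "no_nodes=$no_nodes node0=${nodes_sys[0]:-unset}"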
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.270 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:43.271 node0=1025 expecting 1025 00:08:43.271 ************************************ 00:08:43.271 END TEST odd_alloc 00:08:43.271 ************************************ 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:08:43.271 00:08:43.271 real 0m0.553s 00:08:43.271 user 0m0.268s 00:08:43.271 sys 0m0.281s 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.271 12:20:12 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:08:43.271 12:20:12 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:08:43.271 12:20:12 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:08:43.271 12:20:12 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:43.271 12:20:12 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.271 12:20:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:43.271 ************************************ 00:08:43.271 START TEST custom_alloc 00:08:43.271 ************************************ 00:08:43.271 12:20:12 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:08:43.271 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:08:43.271 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:08:43.271 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:08:43.271 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- 
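END TEST odd_alloc closes the 1025-page check, and custom_alloc immediately repeats the setup for an even count: get_test_nr_hugepages 1048576 turns a 1 GiB request into 512 pages of the 2048 kB default size reported in the meminfo dumps, and with only one node assigns them all to node0. The arithmetic as a standalone sketch; the names are illustrative, not the actual setup/hugepages.sh functions.

#!/usr/bin/env bash
# Requested size (kB) divided by the hugepage size (kB) gives the page count.
size_kb=1048576
hugepagesize_kb=2048                              # from "Hugepagesize: 2048 kB" above
nr_hugepages=$(( size_kb / hugepagesize_kb ))     # 512

# With a single NUMA node, the whole count lands on node0, matching the
# trace's nodes_test[_no_nodes - 1]=512 assignment.
declare -a nodes_test
no_nodes=1
nodes_test[no_nodes - 1]=$nr_hugepages

echo "nr_hugepages=$nr_hugepages nodes: ${nodes_test[*]}"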
setup/hugepages.sh@67 -- # nodes_test=() 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:43.272 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:43.844 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:43.844 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:43.844 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7943984 kB' 'MemAvailable: 10460608 kB' 'Buffers: 2436 kB' 'Cached: 2721924 kB' 'SwapCached: 0 kB' 'Active: 452788 kB' 'Inactive: 2392992 kB' 'Active(anon): 131888 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123032 kB' 'Mapped: 48944 kB' 'Shmem: 10468 kB' 'KReclaimable: 79792 kB' 'Slab: 156992 kB' 'SReclaimable: 79792 kB' 'SUnreclaim: 77200 kB' 'KernelStack: 6548 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.844 12:20:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.844 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.845 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7943984 kB' 'MemAvailable: 10460608 kB' 'Buffers: 2436 kB' 'Cached: 
2721924 kB' 'SwapCached: 0 kB' 'Active: 452388 kB' 'Inactive: 2392992 kB' 'Active(anon): 131488 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122604 kB' 'Mapped: 48944 kB' 'Shmem: 10468 kB' 'KReclaimable: 79792 kB' 'Slab: 156992 kB' 'SReclaimable: 79792 kB' 'SUnreclaim: 77200 kB' 'KernelStack: 6516 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.846 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
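The field-by-field matching traced in these entries is setup/common.sh's get_meminfo helper scanning a /proc/meminfo snapshot for a single key (AnonHugePages in the first pass above, HugePages_Surp here, HugePages_Rsvd at hugepages.sh@100 further down). Condensed into a stand-alone form, and leaving out the per-node /sys/devices/system/node/nodeN/meminfo handling the real helper also supports, the loop reduces to roughly the following sketch (not the verbatim SPDK helper):

    #!/usr/bin/env bash
    # Condensed sketch of the get_meminfo loop seen in the trace, not the
    # verbatim SPDK helper: print the value of one /proc/meminfo field,
    # falling back to 0 when the field is absent.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do     # e.g. "HugePages_Surp: 0"
            [[ $var == "$get" ]] || continue     # skip non-matching keys
            echo "$val"                          # a trailing "kB" unit, if any, lands in $_
            return 0
        done < /proc/meminfo
        echo 0
    }

    # Mirroring the calls traced at hugepages.sh@99 and @100:
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)

The caller only cares about the single echoed number, which is why every non-matching key in the trace simply hits "continue".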
00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.847 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7953384 kB' 'MemAvailable: 10470008 kB' 'Buffers: 2436 kB' 'Cached: 2721924 kB' 'SwapCached: 0 kB' 'Active: 452084 kB' 'Inactive: 2392992 kB' 'Active(anon): 131184 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122356 kB' 'Mapped: 48768 kB' 'Shmem: 10468 kB' 'KReclaimable: 79792 kB' 'Slab: 156992 kB' 'SReclaimable: 79792 kB' 'SUnreclaim: 77200 kB' 'KernelStack: 6528 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.848 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:08:43.849 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:08:43.850 nr_hugepages=512 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:08:43.850 resv_hugepages=0 
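[editor's note] The long field-by-field scan above is the trace of the test's get_meminfo helper: it reads /proc/meminfo one key at a time, skips every key that is not the one requested (hence the wall of "continue" records), and finally echoes the matching value, which is 0 for HugePages_Rsvd here, giving resv=0. A minimal stand-alone sketch of the same pattern, with illustrative names rather than the exact setup/common.sh helper:
get_meminfo_sketch() {                     # illustrative name, not the real helper
    local want=$1 var val _
    while IFS=': ' read -r var val _; do   # same IFS/read split seen in the trace
        [[ $var == "$want" ]] || continue  # skip every non-matching key
        echo "$val"                        # numeric value only; the kB unit lands in _
        return 0
    done < /proc/meminfo
}
get_meminfo_sketch HugePages_Rsvd          # -> 0 on the run traced above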
00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:43.850 surplus_hugepages=0 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:43.850 anon_hugepages=0 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7953384 kB' 'MemAvailable: 10470008 kB' 'Buffers: 2436 kB' 'Cached: 2721924 kB' 'SwapCached: 0 kB' 'Active: 452120 kB' 'Inactive: 2392992 kB' 'Active(anon): 131220 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122656 kB' 'Mapped: 48768 kB' 'Shmem: 10468 kB' 'KReclaimable: 79792 kB' 'Slab: 156992 kB' 'SReclaimable: 79792 kB' 'SUnreclaim: 77200 kB' 'KernelStack: 6544 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 
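[editor's note] The mem=("${mem[@]#Node +([0-9]) }") step recorded above is what lets one parser serve both /proc/meminfo and the per-node files: per-node lines are prefixed with "Node 0 ", and the extglob pattern strips that prefix while leaving system-wide lines untouched. A hedged sketch of just that step (node0 is assumed to exist, as it does on the machine traced here):
shopt -s extglob                                         # +([0-9]) needs extglob
mapfile -t mem < /sys/devices/system/node/node0/meminfo  # lines look like "Node 0 MemTotal: ..."
mem=("${mem[@]#Node +([0-9]) }")                         # drop the "Node 0 " prefix
printf '%s\n' "${mem[@]:0:3}"                            # first lines now match /proc/meminfo's shape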
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.850 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 
12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:43.851 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
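[editor's note] The get_nodes trace above walks /sys/devices/system/node/node+([0-9]), records 512 pages for the single node it finds (no_nodes=1), and then re-reads each node's own meminfo to pick up HugePages_Surp. A rough equivalent of that enumeration, using awk instead of the script's field scanner (names are illustrative, not the real setup/hugepages.sh code):
shopt -s extglob nullglob
declare -A node_huge                        # node id -> HugePages_Total from the per-node file
for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}                       # "/sys/.../node0" -> "0"
    node_huge[$id]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
done
declare -p node_huge                        # on the traced run: node 0 -> 512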
-- # mem_f=/proc/meminfo 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7953620 kB' 'MemUsed: 4288348 kB' 'SwapCached: 0 kB' 'Active: 452016 kB' 'Inactive: 2392992 kB' 'Active(anon): 131116 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 2724360 kB' 'Mapped: 48768 kB' 'AnonPages: 122508 kB' 'Shmem: 10468 kB' 'KernelStack: 6512 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79792 kB' 'Slab: 156992 kB' 'SReclaimable: 79792 kB' 'SUnreclaim: 77200 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 
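[editor's note] One quick sanity check on the per-node dump just printed: the node report carries a MemUsed field that /proc/meminfo lacks, and it is simply MemTotal minus MemFree, 12241968 kB - 7953620 kB = 4288348 kB, which matches the MemUsed value in the trace. In shell arithmetic:
echo $(( 12241968 - 7953620 ))   # -> 4288348, the MemUsed reported for node0 above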
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.852 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.853 12:20:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:43.853 node0=512 expecting 512 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:08:43.853 00:08:43.853 real 0m0.538s 00:08:43.853 user 0m0.269s 00:08:43.853 sys 0m0.301s 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.853 12:20:12 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:08:43.853 ************************************ 00:08:43.853 END TEST custom_alloc 
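[editor's note] A note on the backslash-heavy comparisons throughout this trace (the final one above is [[ 512 == \5\1\2 ]]): they are not garbled output. When the right-hand side of == inside [[ ]] is a quoted variable, xtrace prints it with every character escaped so it reads as a literal match rather than a glob. A small demo reproducing that rendering:
bash -xc 'want=512; [[ 512 == "$want" ]] && echo match'
# among the trace lines printed:  + [[ 512 == \5\1\2 ]]   and then the output:  match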
00:08:43.853 ************************************ 00:08:43.853 12:20:12 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:08:43.853 12:20:12 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:08:43.853 12:20:12 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:43.853 12:20:12 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.853 12:20:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:43.853 ************************************ 00:08:43.853 START TEST no_shrink_alloc 00:08:43.853 ************************************ 00:08:43.853 12:20:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:08:43.853 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:08:43.853 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:08:43.853 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:43.853 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:08:43.853 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:43.853 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:08:43.853 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:43.853 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:43.853 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:43.853 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:43.853 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:08:44.112 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:44.112 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:44.112 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:44.112 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:44.112 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:44.112 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:44.113 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:08:44.113 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:08:44.113 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:08:44.113 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:44.113 12:20:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:44.374 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:44.374 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:44.374 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:08:44.374 
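[editor's note] The no_shrink_alloc test that starts above requests 2097152 kB of huge pages on node 0, and the trace immediately settles on nr_hugepages=1024. That follows from the 2048 kB Hugepagesize reported in the meminfo dumps; a sketch of the arithmetic (the real computation lives in setup/hugepages.sh):
size_kb=2097152        # requested size from get_test_nr_hugepages above
hugepagesize_kb=2048   # "Hugepagesize: 2048 kB" in the meminfo dumps
echo $(( size_kb / hugepagesize_kb ))   # -> 1024, the nr_hugepages the test settles on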
12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6967396 kB' 'MemAvailable: 9484024 kB' 'Buffers: 2436 kB' 'Cached: 2721928 kB' 'SwapCached: 0 kB' 'Active: 452912 kB' 'Inactive: 2392996 kB' 'Active(anon): 132012 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123136 kB' 'Mapped: 48880 kB' 'Shmem: 10468 kB' 'KReclaimable: 79792 kB' 'Slab: 156996 kB' 'SReclaimable: 79792 kB' 'SUnreclaim: 77204 kB' 'KernelStack: 6516 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
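[editor's note] The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] record above is verify_nr_hugepages checking the transparent-hugepage mode before it reads AnonHugePages: the bracketed word in that sysfs file is the active THP mode, and the anon counter is only collected when the mode is not [never]. A hedged sketch of the same gate (standard sysfs path; this mirrors the behaviour the trace shows, not the literal script text):
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    grep AnonHugePages /proc/meminfo                      # 0 kB on the traced run
fi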
00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.374 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 
12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 
12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.375 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6967396 kB' 'MemAvailable: 9484024 kB' 'Buffers: 2436 kB' 'Cached: 2721928 kB' 'SwapCached: 0 kB' 'Active: 452384 kB' 'Inactive: 2392996 kB' 'Active(anon): 131484 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122828 kB' 'Mapped: 48888 kB' 'Shmem: 10468 kB' 'KReclaimable: 79792 kB' 'Slab: 156988 kB' 'SReclaimable: 79792 kB' 'SUnreclaim: 77196 kB' 'KernelStack: 6468 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.376 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:44.377 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6967396 kB' 'MemAvailable: 9484024 kB' 'Buffers: 2436 kB' 'Cached: 2721928 kB' 'SwapCached: 0 kB' 'Active: 452172 kB' 'Inactive: 2392996 kB' 'Active(anon): 131272 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122644 kB' 'Mapped: 48768 kB' 'Shmem: 10468 kB' 'KReclaimable: 79792 kB' 'Slab: 157008 kB' 'SReclaimable: 79792 kB' 'SUnreclaim: 77216 kB' 'KernelStack: 6512 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.378 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.378 12:20:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:44.379 nr_hugepages=1024 00:08:44.379 resv_hugepages=0 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:44.379 surplus_hugepages=0 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:44.379 anon_hugepages=0 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:44.379 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
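The trace above is the same field-by-field scan from setup/common.sh repeated once per /proc/meminfo key: every key that is not the requested field falls into the continue branch, and the matching field echoes its value and returns. A minimal sketch of that get_meminfo pattern, reconstructed from the traced @16-@33 lines (the helper name and the variable names get, node, mem_f, mem, var, val appear in the log; the surrounding control flow is an assumption, not the verbatim source):

  #!/usr/bin/env bash
  # Sketch of the get_meminfo scan traced above (setup/common.sh@16-@33).
  # Reconstructed from the xtrace output; not the verbatim SPDK source.
  shopt -s extglob
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _
      local mem_f=/proc/meminfo
      local -a mem
      # Prefer the per-NUMA-node meminfo when a node number is supplied.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      # Per-node files prefix each line with "Node N "; strip that prefix.
      mem=("${mem[@]#Node +([0-9]) }")
      # Scan field by field: non-matching keys continue, the matching key
      # prints its value (e.g. "0" for AnonHugePages above) and returns.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }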
00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6967396 kB' 'MemAvailable: 9484024 kB' 'Buffers: 2436 kB' 'Cached: 2721928 kB' 'SwapCached: 0 kB' 'Active: 452120 kB' 'Inactive: 2392996 kB' 'Active(anon): 131220 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122620 kB' 'Mapped: 48768 kB' 'Shmem: 10468 kB' 'KReclaimable: 79792 kB' 'Slab: 157008 kB' 'SReclaimable: 79792 kB' 'SUnreclaim: 77216 kB' 'KernelStack: 6528 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
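For orientation, the full meminfo snapshot printed above is internally consistent (all figures straight from the log): HugePages_Total is 1024 and Hugepagesize is 2048 kB, and 1024 x 2048 kB = 2097152 kB, which is exactly the reported Hugetlb value, while HugePages_Free is also 1024, i.e. the pool the test allocated is fully present and still unused. The field-by-field scan of that snapshot for HugePages_Total continues below.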
00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.380 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6968440 kB' 'MemUsed: 5273528 kB' 'SwapCached: 0 kB' 'Active: 447820 kB' 'Inactive: 2392996 kB' 'Active(anon): 126920 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 2724364 kB' 'Mapped: 48148 kB' 'AnonPages: 118360 kB' 
'Shmem: 10468 kB' 'KernelStack: 6464 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79792 kB' 'Slab: 156948 kB' 'SReclaimable: 79792 kB' 'SUnreclaim: 77156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.381 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.382 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.382 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.382 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.382 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.382 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.382 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.382 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.382 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.382 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.382 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.382 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.640 12:20:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.640 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.641 
12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.641 12:20:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:44.641 node0=1024 expecting 1024 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:44.641 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:44.903 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:44.903 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:44.903 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:44.903 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:08:44.903 12:20:13 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:08:44.903 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:08:44.903 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:08:44.903 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:08:44.903 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:08:44.903 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:08:44.903 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:08:44.903 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:44.903 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:44.903 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:44.903 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:08:44.903 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:44.903 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:44.903 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:44.903 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:44.903 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:44.903 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:44.903 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:44.903 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.903 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6971052 kB' 'MemAvailable: 9487680 kB' 'Buffers: 2436 kB' 'Cached: 2721928 kB' 'SwapCached: 0 kB' 'Active: 448660 kB' 'Inactive: 2392996 kB' 'Active(anon): 127760 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118928 kB' 'Mapped: 48244 kB' 'Shmem: 10468 kB' 'KReclaimable: 79788 kB' 'Slab: 156808 kB' 'SReclaimable: 79788 kB' 'SUnreclaim: 77020 kB' 'KernelStack: 6484 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
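Between the two verify passes, the trace above shows the point of the no_shrink_alloc case: hugepages.sh@110-130 confirms node0 still reports its 1024 pages ("node0=1024 expecting 1024"), then hugepages.sh@202 reruns scripts/setup.sh with NRHUGE=512 and CLEAR_HUGE=no, setup.sh answers "Requested 512 hugepages but 1024 already allocated on node0", and hugepages.sh@204 starts verify_nr_hugepages again (the AnonHugePages scan it opens with continues below). The following is a hedged sketch of that verification, reconstructed from the traced checks; the helper names meminfo_field and verify_node_hugepages_sketch are illustrative, not the SPDK functions themselves.

    #!/usr/bin/env bash
    shopt -s extglob   # for the node+([0-9]) glob below

    # Pull one numeric field out of /proc/meminfo or a node's meminfo file.
    meminfo_field() {   # usage: meminfo_field HugePages_Total [node-id]
        local f=/proc/meminfo
        [[ -n $2 && -e /sys/devices/system/node/node$2/meminfo ]] \
            && f=/sys/devices/system/node/node$2/meminfo
        awk -v k="$1:" '{for (i = 1; i <= NF; i++) if ($i == k) {print $(i + 1); exit}}' "$f"
    }

    verify_node_hugepages_sketch() {
        local expected=$1 node total surp resv anon=0
        resv=$(meminfo_field HugePages_Rsvd)
        surp=$(meminfo_field HugePages_Surp)
        total=$(meminfo_field HugePages_Total)
        # AnonHugePages is only sampled when THP is not pinned to "never"
        # (the "[[ always [madvise] never != *[never]* ]]" entry above).
        if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null) != *"[never]"* ]]; then
            anon=$(meminfo_field AnonHugePages)
        fi
        echo "anon_hugepages=$anon"   # mirrors the trace's "anon_hugepages=0"
        # Global sanity: the allocated pool equals requested + surplus + reserved.
        (( total == expected + surp + resv )) || return 1
        # Per-node: a smaller follow-up request (NRHUGE=512) must not have
        # shrunk what a node already holds, hence "node0=1024 expecting 1024".
        for node in /sys/devices/system/node/node+([0-9]); do
            node=${node##*node}
            echo "node${node}=$(meminfo_field HugePages_Total "$node") expecting $expected"
        done
    }

    # e.g. verify_node_hugepages_sketch 1024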
00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.904 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6971052 kB' 'MemAvailable: 9487680 kB' 'Buffers: 2436 kB' 'Cached: 2721928 kB' 'SwapCached: 0 kB' 'Active: 447916 kB' 'Inactive: 2392996 kB' 'Active(anon): 127016 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118132 kB' 'Mapped: 48028 kB' 'Shmem: 10468 kB' 'KReclaimable: 79788 kB' 'Slab: 156804 kB' 'SReclaimable: 79788 kB' 'SUnreclaim: 77016 kB' 'KernelStack: 6464 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.905 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
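In the meminfo snapshot printed just above, HugePages_Total: 1024 together with Hugepagesize: 2048 kB is consistent with the reported Hugetlb total; a one-line arithmetic check of that relationship:

echo "$(( 1024 * 2048 )) kB"   # 2097152 kB, matching 'Hugetlb: 2097152 kB' in the snapshot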
00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 
12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.906 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6971052 kB' 'MemAvailable: 9487680 kB' 'Buffers: 2436 kB' 'Cached: 2721928 kB' 'SwapCached: 0 kB' 'Active: 448144 kB' 'Inactive: 2392996 kB' 'Active(anon): 127244 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118372 kB' 'Mapped: 48028 kB' 'Shmem: 10468 kB' 'KReclaimable: 79788 kB' 'Slab: 156800 kB' 'SReclaimable: 79788 kB' 'SUnreclaim: 77012 kB' 'KernelStack: 6448 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.907 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.908 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
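Because get_meminfo is called here with an empty node argument (local node=), the existence test in the trace expands to /sys/devices/system/node/node/meminfo, which is not a real sysfs path, so the values keep coming from the system-wide /proc/meminfo. A minimal sketch of that per-node fallback, using hypothetical variable names rather than the script's own:

node=""                                                # set to e.g. 0 to read one NUMA node's counters
mem_f=/proc/meminfo                                    # default: system-wide counters
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo   # per-node counters when a node is given
fi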
00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:44.909 nr_hugepages=1024 00:08:44.909 resv_hugepages=0 00:08:44.909 surplus_hugepages=0 00:08:44.909 anon_hugepages=0 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6971304 kB' 'MemAvailable: 9487932 kB' 'Buffers: 2436 kB' 'Cached: 2721928 kB' 'SwapCached: 0 kB' 'Active: 447788 kB' 'Inactive: 2392996 kB' 'Active(anon): 126888 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2392996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118024 kB' 'Mapped: 48028 kB' 'Shmem: 10468 kB' 'KReclaimable: 79788 kB' 'Slab: 156800 kB' 'SReclaimable: 79788 kB' 'SUnreclaim: 77012 kB' 'KernelStack: 6416 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.909 12:20:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.909 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
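After the three lookups, the trace shows hugepages.sh storing anon=0, surp=0 and resv=0 against nr_hugepages=1024 and verifying the pool with plain arithmetic at hugepages.sh@107 and @109, before the HugePages_Total re-read whose matching loop runs around this point. A standalone sketch of those two checks, assuming the values echoed earlier in the trace:

nr_hugepages=1024 surp=0 resv=0
(( 1024 == nr_hugepages + surp + resv )) || echo "reserved or surplus pages changed the pool size"
(( 1024 == nr_hugepages ))               || echo "pool shrank below the requested 1024 pages"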
00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.910 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
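The long run of records above and below is the get_meminfo loop from setup/common.sh: it reads one meminfo line at a time with IFS=': ', skips every key that is not the one requested, and echoes the value once the key matches (HugePages_Total here, HugePages_Surp for node 0 a little further down). A minimal re-creation of that lookup, written from the trace itself rather than taken from the actual setup/common.sh source:

# Return one field from /proc/meminfo, or from a node's own meminfo when a
# node id is given. Illustrative sketch only; the real helper is setup/common.sh.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix each line with "Node <n> "; strip it so the key
    # names match the /proc/meminfo ones before splitting on ': '.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

Called as get_meminfo HugePages_Total it would print 1024 on this machine, and get_meminfo HugePages_Surp 0 yields the node-0 surplus count that hugepages.sh@117 asks for in the records that follow.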
00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6971824 kB' 'MemUsed: 5270144 kB' 'SwapCached: 0 kB' 'Active: 448088 kB' 'Inactive: 2392996 kB' 'Active(anon): 127188 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 
kB' 'Inactive(file): 2392996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 2724364 kB' 'Mapped: 48028 kB' 'AnonPages: 118336 kB' 'Shmem: 10468 kB' 'KernelStack: 6432 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79788 kB' 'Slab: 156800 kB' 'SReclaimable: 79788 kB' 'SUnreclaim: 77012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 
12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.911 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:45.170 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:45.171 12:20:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:45.171 node0=1024 expecting 1024 00:08:45.171 ************************************ 00:08:45.171 END TEST no_shrink_alloc 00:08:45.171 ************************************ 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:45.171 00:08:45.171 real 0m1.077s 00:08:45.171 user 0m0.577s 00:08:45.171 sys 0m0.553s 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.171 12:20:13 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:08:45.171 12:20:14 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:08:45.171 12:20:14 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:08:45.171 12:20:14 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:08:45.171 12:20:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:08:45.171 
12:20:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:45.171 12:20:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:08:45.171 12:20:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:45.171 12:20:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:08:45.171 12:20:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:08:45.171 12:20:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:08:45.171 ************************************ 00:08:45.171 END TEST hugepages 00:08:45.171 ************************************ 00:08:45.171 00:08:45.171 real 0m4.689s 00:08:45.171 user 0m2.257s 00:08:45.171 sys 0m2.445s 00:08:45.171 12:20:14 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.171 12:20:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:45.171 12:20:14 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:08:45.171 12:20:14 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:08:45.171 12:20:14 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:45.171 12:20:14 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.171 12:20:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:08:45.171 ************************************ 00:08:45.171 START TEST driver 00:08:45.171 ************************************ 00:08:45.171 12:20:14 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:08:45.171 * Looking for test storage... 00:08:45.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:45.171 12:20:14 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:08:45.171 12:20:14 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:45.171 12:20:14 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:45.806 12:20:14 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:08:45.806 12:20:14 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:45.806 12:20:14 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.806 12:20:14 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:08:45.806 ************************************ 00:08:45.806 START TEST guess_driver 00:08:45.806 ************************************ 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
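These guess_driver records follow a simple preference order: use vfio-pci when IOMMU groups exist (or unsafe no-IOMMU mode is explicitly enabled), otherwise fall back to uio_pci_generic, whose availability is confirmed with modprobe --show-depends. A hedged sketch of that decision, reconstructed from the trace rather than copied from setup/driver.sh:

# Pick a userspace PCI driver the way the records below do. Illustrative only.
pick_driver() {
    shopt -s nullglob                      # an empty directory must count as zero groups
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    local unsafe_noiommu=''
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_noiommu=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_noiommu == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        # --show-depends lists the .ko files modprobe would load, so any match
        # means uio_pci_generic (and its uio dependency) is installable.
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi
}

On this VM the IOMMU-group glob comes back empty and unsafe mode is not enabled, so the records that follow take the uio_pci_generic branch.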
00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:08:45.806 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:08:45.806 Looking for driver=uio_pci_generic 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:08:45.806 12:20:14 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:46.372 12:20:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:08:46.372 12:20:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:08:46.372 12:20:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:46.629 12:20:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:08:46.629 12:20:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:08:46.629 12:20:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:46.629 12:20:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:08:46.629 12:20:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:08:46.629 12:20:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:46.630 12:20:15 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:08:46.630 12:20:15 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:08:46.630 12:20:15 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:46.630 12:20:15 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:47.194 00:08:47.194 real 0m1.447s 00:08:47.194 user 0m0.556s 00:08:47.194 sys 0m0.900s 00:08:47.194 ************************************ 00:08:47.194 END TEST guess_driver 00:08:47.194 
************************************ 00:08:47.194 12:20:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:47.194 12:20:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:08:47.194 12:20:16 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:08:47.194 ************************************ 00:08:47.194 END TEST driver 00:08:47.194 ************************************ 00:08:47.194 00:08:47.194 real 0m2.156s 00:08:47.194 user 0m0.778s 00:08:47.194 sys 0m1.441s 00:08:47.194 12:20:16 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:47.194 12:20:16 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:08:47.451 12:20:16 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:08:47.451 12:20:16 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:08:47.451 12:20:16 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:47.451 12:20:16 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.451 12:20:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:08:47.451 ************************************ 00:08:47.451 START TEST devices 00:08:47.451 ************************************ 00:08:47.451 12:20:16 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:08:47.451 * Looking for test storage... 00:08:47.451 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:47.451 12:20:16 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:08:47.451 12:20:16 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:08:47.451 12:20:16 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:47.451 12:20:16 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:48.384 12:20:17 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:08:48.384 12:20:17 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:08:48.384 12:20:17 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:08:48.384 12:20:17 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:08:48.384 12:20:17 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:48.384 12:20:17 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:08:48.385 12:20:17 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:08:48.385 12:20:17 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:48.385 12:20:17 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:48.385 12:20:17 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:48.385 12:20:17 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:08:48.385 12:20:17 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:08:48.385 12:20:17 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:08:48.385 12:20:17 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:48.385 12:20:17 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:48.385 12:20:17 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
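Before picking a test disk, the devices test screens every /sys/block/nvme* entry: zoned namespaces are excluded (the queue/zoned checks around this point), disks that already carry a partition table are excluded (the spdk-gpt.py / blkid PTTYPE probes further down, where "No valid GPT data, bailing" is the desired outcome), and anything smaller than min_disk_size=3221225472 bytes (3 GiB) is excluded. A compact, illustrative version of that screen, assembled from the trace rather than from setup/devices.sh:

min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in devices.sh@198
# Return success only for a disk the test suite would consider usable.
is_usable_test_disk() {
    local dev=$1                            # e.g. nvme0n1
    # Zoned namespaces report "host-aware"/"host-managed" here rather than "none".
    [[ $(cat "/sys/block/$dev/queue/zoned" 2>/dev/null) == none ]] || return 1
    # blkid prints a PTTYPE value (gpt, dos, ...) only when a partition table exists.
    [[ -z $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] || return 1
    # sysfs reports the size in 512-byte sectors.
    local size=$(( $(cat "/sys/block/$dev/size") * 512 ))
    (( size >= min_disk_size ))
}

In the records that follow, nvme0n1 through nvme0n3 (4294967296 bytes each) and nvme1n1 (5368709120 bytes) all pass the screen, and nvme0n1 is declared the test disk used by the nvme_mount test.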
00:08:48.385 12:20:17 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:08:48.385 12:20:17 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:08:48.385 12:20:17 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:48.385 12:20:17 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:48.385 12:20:17 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:08:48.385 12:20:17 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:08:48.385 12:20:17 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:48.385 12:20:17 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:08:48.385 12:20:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:08:48.385 12:20:17 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:08:48.385 No valid GPT data, bailing 00:08:48.385 12:20:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:48.385 12:20:17 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:08:48.385 12:20:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:08:48.385 12:20:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:48.385 12:20:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:48.385 12:20:17 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:08:48.385 
12:20:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:08:48.385 12:20:17 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:08:48.385 No valid GPT data, bailing 00:08:48.385 12:20:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:08:48.385 12:20:17 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:08:48.385 12:20:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:08:48.385 12:20:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:08:48.385 12:20:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:08:48.385 12:20:17 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:08:48.385 12:20:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:08:48.385 12:20:17 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:08:48.385 No valid GPT data, bailing 00:08:48.385 12:20:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:08:48.385 12:20:17 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:08:48.385 12:20:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:08:48.385 12:20:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:08:48.385 12:20:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:08:48.385 12:20:17 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:08:48.385 12:20:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:08:48.385 12:20:17 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:08:48.385 No valid GPT data, bailing 00:08:48.385 12:20:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:48.385 12:20:17 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:08:48.385 12:20:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:08:48.385 12:20:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:08:48.385 12:20:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:08:48.385 12:20:17 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:08:48.385 12:20:17 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:08:48.385 12:20:17 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:48.385 12:20:17 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.385 12:20:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:08:48.385 ************************************ 00:08:48.385 START TEST nvme_mount 00:08:48.385 ************************************ 00:08:48.385 12:20:17 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:08:48.385 12:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:08:48.385 12:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:08:48.385 12:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:48.385 12:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:48.385 12:20:17 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:08:48.385 12:20:17 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:08:48.385 12:20:17 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:08:48.385 12:20:17 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:08:48.385 12:20:17 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:08:48.385 12:20:17 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:08:48.385 12:20:17 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:08:48.385 12:20:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:08:48.385 12:20:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:48.385 12:20:17 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:48.385 12:20:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:08:48.385 12:20:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:48.385 12:20:17 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:08:48.385 12:20:17 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:08:48.385 12:20:17 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:08:49.760 Creating new GPT entries in memory. 00:08:49.760 GPT data structures destroyed! You may now partition the disk using fdisk or 00:08:49.760 other utilities. 00:08:49.760 12:20:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:08:49.760 12:20:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:49.760 12:20:18 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:08:49.760 12:20:18 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:49.760 12:20:18 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:08:50.693 Creating new GPT entries in memory. 00:08:50.693 The operation has completed successfully. 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 69166 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:50.693 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:50.951 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:50.951 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:50.951 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:50.951 12:20:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:50.951 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:50.951 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:08:50.951 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:50.951 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:50.951 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:50.951 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:08:50.951 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:50.951 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:51.209 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:51.209 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:08:51.209 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:51.209 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:51.209 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:51.468 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:51.468 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:51.468 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:51.468 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:51.468 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:08:51.468 12:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:08:51.468 12:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:51.468 12:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:08:51.468 12:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:08:51.468 12:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:51.468 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:51.468 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:08:51.468 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:08:51.468 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:51.468 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:51.468 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:08:51.468 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:51.468 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:08:51.468 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:08:51.468 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:08:51.468 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:51.468 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:08:51.468 12:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:08:51.468 12:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:51.726 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:51.726 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:08:51.726 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:08:51.726 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:51.726 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:51.726 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:51.726 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:51.726 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:51.726 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:51.726 12:20:20 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:51.985 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:51.985 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:08:51.985 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:51.985 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:51.985 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:51.985 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:51.985 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:08:51.985 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:08:51.985 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:08:51.985 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:08:51.985 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:08:51.985 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:08:51.985 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:08:51.985 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:08:51.985 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:51.985 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:08:51.985 12:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:08:51.985 12:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:08:51.985 12:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:52.243 12:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:52.243 12:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:08:52.243 12:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:08:52.243 12:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:52.243 12:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:52.243 12:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:52.243 12:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:52.243 12:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:52.243 12:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:52.243 12:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:52.501 12:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:52.501 12:20:21 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:08:52.501 12:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:08:52.501 12:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:08:52.501 12:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:52.501 12:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:52.501 12:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:52.501 12:20:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:52.501 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:52.501 00:08:52.501 real 0m3.942s 00:08:52.501 user 0m0.689s 00:08:52.501 sys 0m0.987s 00:08:52.501 12:20:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:52.501 12:20:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:08:52.501 ************************************ 00:08:52.501 END TEST nvme_mount 00:08:52.501 ************************************ 00:08:52.501 12:20:21 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:08:52.501 12:20:21 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:08:52.501 12:20:21 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:52.501 12:20:21 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.501 12:20:21 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:08:52.501 ************************************ 00:08:52.501 START TEST dm_mount 00:08:52.501 ************************************ 00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:08:52.501 12:20:21 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:08:53.437 Creating new GPT entries in memory. 00:08:53.437 GPT data structures destroyed! You may now partition the disk using fdisk or 00:08:53.437 other utilities. 00:08:53.437 12:20:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:08:53.437 12:20:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:53.437 12:20:22 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:08:53.437 12:20:22 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:53.437 12:20:22 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:08:54.481 Creating new GPT entries in memory. 00:08:54.481 The operation has completed successfully. 00:08:54.481 12:20:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:08:54.481 12:20:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:54.481 12:20:23 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:08:54.481 12:20:23 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:54.481 12:20:23 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:08:55.854 The operation has completed successfully. 
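(For reference: the dm_mount partitioning traced above can be reproduced by hand with the same sgdisk calls the partition_drive helper emits. This is a minimal standalone sketch, not the test script itself; the device path and sector ranges are taken directly from the trace, and the final uevent-sync step is replaced with a generic partition-table re-read as an assumption.)

    # sketch: recreate the two 128 MiB test partitions shown in the trace above
    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                            # destroy any existing GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:264191    # partition 1: sectors 2048-264191 (128 MiB)
    flock "$disk" sgdisk "$disk" --new=2:264192:526335  # partition 2: sectors 264192-526335 (128 MiB)
    partprobe "$disk"                                   # assumption: stand-in for the repo's sync_dev_uevents.sh helper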
00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 69599 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:08:55.854 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:55.855 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:55.855 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:08:55.855 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:08:55.855 12:20:24 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:08:55.855 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:08:55.855 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:55.855 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:08:55.855 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:08:55.855 12:20:24 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:08:55.855 12:20:24 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:55.855 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:55.855 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:08:55.855 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:08:55.855 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:55.855 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:55.855 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:56.112 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:56.112 12:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:56.112 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:56.112 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:56.112 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:56.112 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:08:56.112 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:56.112 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:08:56.112 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:56.112 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:56.112 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:08:56.112 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:08:56.112 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:08:56.112 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:08:56.112 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:08:56.112 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:08:56.112 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:08:56.112 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:08:56.112 12:20:25 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:08:56.112 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:08:56.112 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:08:56.112 12:20:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:08:56.112 12:20:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:56.369 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:56.369 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:08:56.369 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:08:56.369 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:56.369 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:56.369 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:56.369 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:56.369 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:56.627 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:56.627 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:56.627 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:56.627 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:08:56.627 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:08:56.627 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:08:56.627 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:56.627 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:56.627 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:08:56.627 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:56.627 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:08:56.627 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:56.627 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:56.627 12:20:25 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:08:56.627 00:08:56.627 real 0m4.174s 00:08:56.627 user 0m0.437s 00:08:56.627 sys 0m0.699s 00:08:56.627 12:20:25 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:56.627 12:20:25 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:08:56.627 ************************************ 00:08:56.627 END TEST dm_mount 00:08:56.627 ************************************ 00:08:56.627 12:20:25 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:08:56.627 12:20:25 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:08:56.627 12:20:25 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:08:56.627 12:20:25 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:56.627 12:20:25 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:56.627 12:20:25 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:08:56.627 12:20:25 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:56.627 12:20:25 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:56.884 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:56.884 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:56.884 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:56.884 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:56.884 12:20:25 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:08:56.884 12:20:25 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:56.884 12:20:25 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:56.884 12:20:25 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:56.884 12:20:25 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:56.884 12:20:25 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:08:56.884 12:20:25 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:08:56.884 00:08:56.884 real 0m9.639s 00:08:56.884 user 0m1.780s 00:08:56.884 sys 0m2.267s 00:08:56.884 12:20:25 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:56.884 12:20:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:08:56.884 ************************************ 00:08:56.884 END TEST devices 00:08:56.884 ************************************ 00:08:57.142 12:20:25 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:08:57.142 00:08:57.142 real 0m21.525s 00:08:57.142 user 0m7.016s 00:08:57.142 sys 0m8.917s 00:08:57.142 12:20:25 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:57.142 12:20:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:08:57.142 ************************************ 00:08:57.142 END TEST setup.sh 00:08:57.142 ************************************ 00:08:57.142 12:20:26 -- common/autotest_common.sh@1142 -- # return 0 00:08:57.142 12:20:26 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:57.708 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:57.708 Hugepages 00:08:57.708 node hugesize free / total 00:08:57.708 node0 1048576kB 0 / 0 00:08:57.708 node0 2048kB 2048 / 2048 00:08:57.708 00:08:57.708 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:57.708 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:57.708 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:08:57.966 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:08:57.966 12:20:26 -- spdk/autotest.sh@130 -- # uname -s 00:08:57.966 12:20:26 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:08:57.966 12:20:26 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:08:57.966 12:20:26 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:58.531 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:58.531 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:58.789 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:58.789 12:20:27 -- common/autotest_common.sh@1532 -- # sleep 1 00:08:59.725 12:20:28 -- common/autotest_common.sh@1533 -- # bdfs=() 00:08:59.725 12:20:28 -- common/autotest_common.sh@1533 -- # local bdfs 00:08:59.725 12:20:28 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:08:59.725 12:20:28 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:08:59.725 12:20:28 -- common/autotest_common.sh@1513 -- # bdfs=() 00:08:59.725 12:20:28 -- common/autotest_common.sh@1513 -- # local bdfs 00:08:59.725 12:20:28 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:59.725 12:20:28 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:59.725 12:20:28 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:08:59.725 12:20:28 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:08:59.725 12:20:28 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:59.725 12:20:28 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:00.291 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:00.291 Waiting for block devices as requested 00:09:00.291 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:00.291 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:00.291 12:20:29 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:09:00.291 12:20:29 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:09:00.291 12:20:29 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:09:00.291 12:20:29 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:09:00.291 12:20:29 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:00.291 12:20:29 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:09:00.291 12:20:29 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:00.291 12:20:29 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:09:00.291 12:20:29 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:09:00.291 12:20:29 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:09:00.291 12:20:29 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:09:00.291 12:20:29 -- common/autotest_common.sh@1545 -- # grep oacs 00:09:00.291 12:20:29 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:09:00.291 12:20:29 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:09:00.291 12:20:29 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:09:00.291 12:20:29 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:09:00.291 12:20:29 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:09:00.291 12:20:29 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:09:00.291 12:20:29 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:09:00.291 12:20:29 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:09:00.291 12:20:29 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:09:00.291 12:20:29 -- common/autotest_common.sh@1557 -- # continue 00:09:00.291 
12:20:29 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:09:00.291 12:20:29 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:09:00.291 12:20:29 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:09:00.291 12:20:29 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:09:00.291 12:20:29 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:00.291 12:20:29 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:09:00.291 12:20:29 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:00.291 12:20:29 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:09:00.291 12:20:29 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:09:00.291 12:20:29 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:09:00.291 12:20:29 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:09:00.291 12:20:29 -- common/autotest_common.sh@1545 -- # grep oacs 00:09:00.291 12:20:29 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:09:00.291 12:20:29 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:09:00.291 12:20:29 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:09:00.291 12:20:29 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:09:00.291 12:20:29 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:09:00.291 12:20:29 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:09:00.549 12:20:29 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:09:00.549 12:20:29 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:09:00.549 12:20:29 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:09:00.549 12:20:29 -- common/autotest_common.sh@1557 -- # continue 00:09:00.549 12:20:29 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:09:00.549 12:20:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:00.549 12:20:29 -- common/autotest_common.sh@10 -- # set +x 00:09:00.549 12:20:29 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:09:00.549 12:20:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:00.549 12:20:29 -- common/autotest_common.sh@10 -- # set +x 00:09:00.549 12:20:29 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:01.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:01.204 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:01.204 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:01.204 12:20:30 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:09:01.204 12:20:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:01.204 12:20:30 -- common/autotest_common.sh@10 -- # set +x 00:09:01.463 12:20:30 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:09:01.463 12:20:30 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:09:01.463 12:20:30 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:09:01.463 12:20:30 -- common/autotest_common.sh@1577 -- # bdfs=() 00:09:01.463 12:20:30 -- common/autotest_common.sh@1577 -- # local bdfs 00:09:01.463 12:20:30 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:09:01.463 12:20:30 -- common/autotest_common.sh@1513 -- # bdfs=() 00:09:01.463 12:20:30 -- common/autotest_common.sh@1513 -- # local bdfs 00:09:01.463 12:20:30 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:01.463 12:20:30 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:09:01.463 12:20:30 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:01.463 12:20:30 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:09:01.463 12:20:30 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:09:01.463 12:20:30 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:09:01.463 12:20:30 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:09:01.463 12:20:30 -- common/autotest_common.sh@1580 -- # device=0x0010 00:09:01.463 12:20:30 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:01.463 12:20:30 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:09:01.463 12:20:30 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:09:01.463 12:20:30 -- common/autotest_common.sh@1580 -- # device=0x0010 00:09:01.463 12:20:30 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:01.463 12:20:30 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:09:01.463 12:20:30 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:09:01.463 12:20:30 -- common/autotest_common.sh@1593 -- # return 0 00:09:01.463 12:20:30 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:09:01.463 12:20:30 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:09:01.463 12:20:30 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:01.463 12:20:30 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:01.463 12:20:30 -- spdk/autotest.sh@162 -- # timing_enter lib 00:09:01.464 12:20:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:01.464 12:20:30 -- common/autotest_common.sh@10 -- # set +x 00:09:01.464 12:20:30 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:09:01.464 12:20:30 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:09:01.464 12:20:30 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:09:01.464 12:20:30 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:01.464 12:20:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:01.464 12:20:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.464 12:20:30 -- common/autotest_common.sh@10 -- # set +x 00:09:01.464 ************************************ 00:09:01.464 START TEST env 00:09:01.464 ************************************ 00:09:01.464 12:20:30 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:01.464 * Looking for test storage... 
00:09:01.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:01.464 12:20:30 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:01.464 12:20:30 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:01.464 12:20:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.464 12:20:30 env -- common/autotest_common.sh@10 -- # set +x 00:09:01.464 ************************************ 00:09:01.464 START TEST env_memory 00:09:01.464 ************************************ 00:09:01.464 12:20:30 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:01.464 00:09:01.464 00:09:01.464 CUnit - A unit testing framework for C - Version 2.1-3 00:09:01.464 http://cunit.sourceforge.net/ 00:09:01.464 00:09:01.464 00:09:01.464 Suite: memory 00:09:01.464 Test: alloc and free memory map ...[2024-07-12 12:20:30.541908] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:01.723 passed 00:09:01.723 Test: mem map translation ...[2024-07-12 12:20:30.572684] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:01.723 [2024-07-12 12:20:30.572724] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:01.724 [2024-07-12 12:20:30.572780] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:01.724 [2024-07-12 12:20:30.572800] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:01.724 passed 00:09:01.724 Test: mem map registration ...[2024-07-12 12:20:30.636542] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:09:01.724 [2024-07-12 12:20:30.636589] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:09:01.724 passed 00:09:01.724 Test: mem map adjacent registrations ...passed 00:09:01.724 00:09:01.724 Run Summary: Type Total Ran Passed Failed Inactive 00:09:01.724 suites 1 1 n/a 0 0 00:09:01.724 tests 4 4 4 0 0 00:09:01.724 asserts 152 152 152 0 n/a 00:09:01.724 00:09:01.724 Elapsed time = 0.214 seconds 00:09:01.724 00:09:01.724 real 0m0.229s 00:09:01.724 user 0m0.215s 00:09:01.724 sys 0m0.012s 00:09:01.724 12:20:30 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.724 12:20:30 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:01.724 ************************************ 00:09:01.724 END TEST env_memory 00:09:01.724 ************************************ 00:09:01.724 12:20:30 env -- common/autotest_common.sh@1142 -- # return 0 00:09:01.724 12:20:30 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:01.724 12:20:30 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:01.724 12:20:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.724 12:20:30 env -- common/autotest_common.sh@10 -- # set +x 00:09:01.724 ************************************ 00:09:01.724 START TEST env_vtophys 
00:09:01.724 ************************************ 00:09:01.724 12:20:30 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:01.724 EAL: lib.eal log level changed from notice to debug 00:09:01.724 EAL: Detected lcore 0 as core 0 on socket 0 00:09:01.724 EAL: Detected lcore 1 as core 0 on socket 0 00:09:01.724 EAL: Detected lcore 2 as core 0 on socket 0 00:09:01.724 EAL: Detected lcore 3 as core 0 on socket 0 00:09:01.724 EAL: Detected lcore 4 as core 0 on socket 0 00:09:01.724 EAL: Detected lcore 5 as core 0 on socket 0 00:09:01.724 EAL: Detected lcore 6 as core 0 on socket 0 00:09:01.724 EAL: Detected lcore 7 as core 0 on socket 0 00:09:01.724 EAL: Detected lcore 8 as core 0 on socket 0 00:09:01.724 EAL: Detected lcore 9 as core 0 on socket 0 00:09:01.724 EAL: Maximum logical cores by configuration: 128 00:09:01.724 EAL: Detected CPU lcores: 10 00:09:01.724 EAL: Detected NUMA nodes: 1 00:09:01.724 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:09:01.724 EAL: Detected shared linkage of DPDK 00:09:01.724 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:09:01.724 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:09:01.724 EAL: Registered [vdev] bus. 00:09:01.724 EAL: bus.vdev log level changed from disabled to notice 00:09:01.724 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:09:01.724 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:09:01.724 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:09:01.724 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:09:01.724 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:09:01.724 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:09:01.724 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:09:01.724 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:09:01.724 EAL: No shared files mode enabled, IPC will be disabled 00:09:01.724 EAL: No shared files mode enabled, IPC is disabled 00:09:01.724 EAL: Selected IOVA mode 'PA' 00:09:01.724 EAL: Probing VFIO support... 00:09:01.724 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:01.724 EAL: VFIO modules not loaded, skipping VFIO support... 00:09:01.724 EAL: Ask a virtual area of 0x2e000 bytes 00:09:01.724 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:01.724 EAL: Setting up physically contiguous memory... 
00:09:01.724 EAL: Setting maximum number of open files to 524288 00:09:01.724 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:01.724 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:01.724 EAL: Ask a virtual area of 0x61000 bytes 00:09:01.724 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:01.724 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:01.724 EAL: Ask a virtual area of 0x400000000 bytes 00:09:01.724 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:01.724 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:01.724 EAL: Ask a virtual area of 0x61000 bytes 00:09:01.724 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:01.724 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:01.724 EAL: Ask a virtual area of 0x400000000 bytes 00:09:01.724 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:01.724 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:01.724 EAL: Ask a virtual area of 0x61000 bytes 00:09:01.724 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:01.724 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:01.724 EAL: Ask a virtual area of 0x400000000 bytes 00:09:01.724 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:01.724 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:01.724 EAL: Ask a virtual area of 0x61000 bytes 00:09:01.724 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:01.724 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:01.724 EAL: Ask a virtual area of 0x400000000 bytes 00:09:01.724 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:01.724 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:01.724 EAL: Hugepages will be freed exactly as allocated. 00:09:01.724 EAL: No shared files mode enabled, IPC is disabled 00:09:01.724 EAL: No shared files mode enabled, IPC is disabled 00:09:01.983 EAL: TSC frequency is ~2200000 KHz 00:09:01.983 EAL: Main lcore 0 is ready (tid=7fec5244ba00;cpuset=[0]) 00:09:01.983 EAL: Trying to obtain current memory policy. 00:09:01.984 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:01.984 EAL: Restoring previous memory policy: 0 00:09:01.984 EAL: request: mp_malloc_sync 00:09:01.984 EAL: No shared files mode enabled, IPC is disabled 00:09:01.984 EAL: Heap on socket 0 was expanded by 2MB 00:09:01.984 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:01.984 EAL: No shared files mode enabled, IPC is disabled 00:09:01.984 EAL: No PCI address specified using 'addr=' in: bus=pci 00:09:01.984 EAL: Mem event callback 'spdk:(nil)' registered 00:09:01.984 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:09:01.984 00:09:01.984 00:09:01.984 CUnit - A unit testing framework for C - Version 2.1-3 00:09:01.984 http://cunit.sourceforge.net/ 00:09:01.984 00:09:01.984 00:09:01.984 Suite: components_suite 00:09:01.984 Test: vtophys_malloc_test ...passed 00:09:01.984 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:09:01.984 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:01.984 EAL: Restoring previous memory policy: 4 00:09:01.984 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.984 EAL: request: mp_malloc_sync 00:09:01.984 EAL: No shared files mode enabled, IPC is disabled 00:09:01.984 EAL: Heap on socket 0 was expanded by 4MB 00:09:01.984 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.984 EAL: request: mp_malloc_sync 00:09:01.984 EAL: No shared files mode enabled, IPC is disabled 00:09:01.984 EAL: Heap on socket 0 was shrunk by 4MB 00:09:01.984 EAL: Trying to obtain current memory policy. 00:09:01.984 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:01.984 EAL: Restoring previous memory policy: 4 00:09:01.984 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.984 EAL: request: mp_malloc_sync 00:09:01.984 EAL: No shared files mode enabled, IPC is disabled 00:09:01.984 EAL: Heap on socket 0 was expanded by 6MB 00:09:01.984 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.984 EAL: request: mp_malloc_sync 00:09:01.984 EAL: No shared files mode enabled, IPC is disabled 00:09:01.984 EAL: Heap on socket 0 was shrunk by 6MB 00:09:01.984 EAL: Trying to obtain current memory policy. 00:09:01.984 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:01.984 EAL: Restoring previous memory policy: 4 00:09:01.984 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.984 EAL: request: mp_malloc_sync 00:09:01.984 EAL: No shared files mode enabled, IPC is disabled 00:09:01.984 EAL: Heap on socket 0 was expanded by 10MB 00:09:01.984 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.984 EAL: request: mp_malloc_sync 00:09:01.984 EAL: No shared files mode enabled, IPC is disabled 00:09:01.984 EAL: Heap on socket 0 was shrunk by 10MB 00:09:01.984 EAL: Trying to obtain current memory policy. 00:09:01.984 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:01.984 EAL: Restoring previous memory policy: 4 00:09:01.984 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.984 EAL: request: mp_malloc_sync 00:09:01.984 EAL: No shared files mode enabled, IPC is disabled 00:09:01.984 EAL: Heap on socket 0 was expanded by 18MB 00:09:01.984 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.984 EAL: request: mp_malloc_sync 00:09:01.984 EAL: No shared files mode enabled, IPC is disabled 00:09:01.984 EAL: Heap on socket 0 was shrunk by 18MB 00:09:01.984 EAL: Trying to obtain current memory policy. 00:09:01.984 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:01.984 EAL: Restoring previous memory policy: 4 00:09:01.984 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.984 EAL: request: mp_malloc_sync 00:09:01.984 EAL: No shared files mode enabled, IPC is disabled 00:09:01.984 EAL: Heap on socket 0 was expanded by 34MB 00:09:01.984 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.984 EAL: request: mp_malloc_sync 00:09:01.984 EAL: No shared files mode enabled, IPC is disabled 00:09:01.984 EAL: Heap on socket 0 was shrunk by 34MB 00:09:01.984 EAL: Trying to obtain current memory policy. 
00:09:01.984 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:01.984 EAL: Restoring previous memory policy: 4 00:09:01.984 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.984 EAL: request: mp_malloc_sync 00:09:01.984 EAL: No shared files mode enabled, IPC is disabled 00:09:01.984 EAL: Heap on socket 0 was expanded by 66MB 00:09:01.984 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.984 EAL: request: mp_malloc_sync 00:09:01.984 EAL: No shared files mode enabled, IPC is disabled 00:09:01.984 EAL: Heap on socket 0 was shrunk by 66MB 00:09:01.984 EAL: Trying to obtain current memory policy. 00:09:01.984 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:01.984 EAL: Restoring previous memory policy: 4 00:09:01.984 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.984 EAL: request: mp_malloc_sync 00:09:01.984 EAL: No shared files mode enabled, IPC is disabled 00:09:01.984 EAL: Heap on socket 0 was expanded by 130MB 00:09:01.984 EAL: Calling mem event callback 'spdk:(nil)' 00:09:02.242 EAL: request: mp_malloc_sync 00:09:02.242 EAL: No shared files mode enabled, IPC is disabled 00:09:02.242 EAL: Heap on socket 0 was shrunk by 130MB 00:09:02.242 EAL: Trying to obtain current memory policy. 00:09:02.242 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:02.242 EAL: Restoring previous memory policy: 4 00:09:02.242 EAL: Calling mem event callback 'spdk:(nil)' 00:09:02.242 EAL: request: mp_malloc_sync 00:09:02.242 EAL: No shared files mode enabled, IPC is disabled 00:09:02.242 EAL: Heap on socket 0 was expanded by 258MB 00:09:02.242 EAL: Calling mem event callback 'spdk:(nil)' 00:09:02.242 EAL: request: mp_malloc_sync 00:09:02.242 EAL: No shared files mode enabled, IPC is disabled 00:09:02.242 EAL: Heap on socket 0 was shrunk by 258MB 00:09:02.242 EAL: Trying to obtain current memory policy. 00:09:02.242 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:02.501 EAL: Restoring previous memory policy: 4 00:09:02.501 EAL: Calling mem event callback 'spdk:(nil)' 00:09:02.501 EAL: request: mp_malloc_sync 00:09:02.501 EAL: No shared files mode enabled, IPC is disabled 00:09:02.501 EAL: Heap on socket 0 was expanded by 514MB 00:09:02.501 EAL: Calling mem event callback 'spdk:(nil)' 00:09:02.501 EAL: request: mp_malloc_sync 00:09:02.501 EAL: No shared files mode enabled, IPC is disabled 00:09:02.501 EAL: Heap on socket 0 was shrunk by 514MB 00:09:02.501 EAL: Trying to obtain current memory policy. 
00:09:02.501 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:02.759 EAL: Restoring previous memory policy: 4 00:09:02.759 EAL: Calling mem event callback 'spdk:(nil)' 00:09:02.759 EAL: request: mp_malloc_sync 00:09:02.759 EAL: No shared files mode enabled, IPC is disabled 00:09:02.759 EAL: Heap on socket 0 was expanded by 1026MB 00:09:03.018 EAL: Calling mem event callback 'spdk:(nil)' 00:09:03.278 EAL: request: mp_malloc_sync 00:09:03.278 EAL: No shared files mode enabled, IPC is disabled 00:09:03.278 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:03.278 passed 00:09:03.278 00:09:03.278 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.278 suites 1 1 n/a 0 0 00:09:03.278 tests 2 2 2 0 0 00:09:03.278 asserts 5379 5379 5379 0 n/a 00:09:03.278 00:09:03.278 Elapsed time = 1.283 seconds 00:09:03.278 EAL: Calling mem event callback 'spdk:(nil)' 00:09:03.278 EAL: request: mp_malloc_sync 00:09:03.278 EAL: No shared files mode enabled, IPC is disabled 00:09:03.278 EAL: Heap on socket 0 was shrunk by 2MB 00:09:03.278 EAL: No shared files mode enabled, IPC is disabled 00:09:03.278 EAL: No shared files mode enabled, IPC is disabled 00:09:03.278 EAL: No shared files mode enabled, IPC is disabled 00:09:03.278 00:09:03.278 real 0m1.478s 00:09:03.278 user 0m0.802s 00:09:03.278 sys 0m0.542s 00:09:03.278 12:20:32 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.278 12:20:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:03.278 ************************************ 00:09:03.278 END TEST env_vtophys 00:09:03.278 ************************************ 00:09:03.278 12:20:32 env -- common/autotest_common.sh@1142 -- # return 0 00:09:03.278 12:20:32 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:03.278 12:20:32 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:03.278 12:20:32 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.278 12:20:32 env -- common/autotest_common.sh@10 -- # set +x 00:09:03.278 ************************************ 00:09:03.278 START TEST env_pci 00:09:03.278 ************************************ 00:09:03.278 12:20:32 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:03.278 00:09:03.278 00:09:03.278 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.278 http://cunit.sourceforge.net/ 00:09:03.278 00:09:03.278 00:09:03.278 Suite: pci 00:09:03.278 Test: pci_hook ...[2024-07-12 12:20:32.312074] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70788 has claimed it 00:09:03.278 passed 00:09:03.278 00:09:03.278 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.278 suites 1 1 n/a 0 0 00:09:03.278 tests 1 1 1 0 0 00:09:03.278 asserts 25 25 25 0 n/a 00:09:03.278 00:09:03.278 Elapsed time = 0.002 seconds 00:09:03.278 EAL: Cannot find device (10000:00:01.0) 00:09:03.278 EAL: Failed to attach device on primary process 00:09:03.278 00:09:03.278 real 0m0.019s 00:09:03.278 user 0m0.010s 00:09:03.278 sys 0m0.009s 00:09:03.278 12:20:32 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.278 12:20:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:03.278 ************************************ 00:09:03.278 END TEST env_pci 00:09:03.278 ************************************ 00:09:03.278 12:20:32 env -- common/autotest_common.sh@1142 -- # 
return 0 00:09:03.278 12:20:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:03.278 12:20:32 env -- env/env.sh@15 -- # uname 00:09:03.278 12:20:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:03.278 12:20:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:03.278 12:20:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:03.278 12:20:32 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:03.278 12:20:32 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.537 12:20:32 env -- common/autotest_common.sh@10 -- # set +x 00:09:03.537 ************************************ 00:09:03.537 START TEST env_dpdk_post_init 00:09:03.537 ************************************ 00:09:03.537 12:20:32 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:03.537 EAL: Detected CPU lcores: 10 00:09:03.537 EAL: Detected NUMA nodes: 1 00:09:03.537 EAL: Detected shared linkage of DPDK 00:09:03.537 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:03.537 EAL: Selected IOVA mode 'PA' 00:09:03.537 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:03.537 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:03.537 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:09:03.537 Starting DPDK initialization... 00:09:03.537 Starting SPDK post initialization... 00:09:03.537 SPDK NVMe probe 00:09:03.537 Attaching to 0000:00:10.0 00:09:03.537 Attaching to 0000:00:11.0 00:09:03.537 Attached to 0000:00:10.0 00:09:03.537 Attached to 0000:00:11.0 00:09:03.537 Cleaning up... 
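The env_dpdk_post_init run above initializes DPDK first, hands the already-initialized environment over to SPDK, and finishes with an NVMe probe of the two emulated controllers. For reference, a minimal manual invocation is sketched below; the path and flags are the ones shown in this run, and the comments describe the standard meaning of the options (-c 0x1 restricts the app to core 0, --base-virtaddr pins the start of the virtual address range so memory maps identically across processes).

```bash
# Re-run the post-init helper the same way the harness does, assuming
# hugepages are already configured on the host.
cd /home/vagrant/spdk_repo/spdk
./test/env/env_dpdk_post_init/env_dpdk_post_init \
    -c 0x1 --base-virtaddr=0x200000000000
```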
00:09:03.537 00:09:03.537 real 0m0.189s 00:09:03.537 user 0m0.043s 00:09:03.537 sys 0m0.046s 00:09:03.537 12:20:32 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.537 12:20:32 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:03.537 ************************************ 00:09:03.537 END TEST env_dpdk_post_init 00:09:03.537 ************************************ 00:09:03.537 12:20:32 env -- common/autotest_common.sh@1142 -- # return 0 00:09:03.537 12:20:32 env -- env/env.sh@26 -- # uname 00:09:03.537 12:20:32 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:03.537 12:20:32 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:03.537 12:20:32 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:03.537 12:20:32 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.537 12:20:32 env -- common/autotest_common.sh@10 -- # set +x 00:09:03.537 ************************************ 00:09:03.537 START TEST env_mem_callbacks 00:09:03.537 ************************************ 00:09:03.537 12:20:32 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:03.796 EAL: Detected CPU lcores: 10 00:09:03.796 EAL: Detected NUMA nodes: 1 00:09:03.796 EAL: Detected shared linkage of DPDK 00:09:03.796 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:03.796 EAL: Selected IOVA mode 'PA' 00:09:03.796 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:03.796 00:09:03.796 00:09:03.796 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.796 http://cunit.sourceforge.net/ 00:09:03.796 00:09:03.796 00:09:03.796 Suite: memory 00:09:03.796 Test: test ... 
00:09:03.796 register 0x200000200000 2097152 00:09:03.796 malloc 3145728 00:09:03.796 register 0x200000400000 4194304 00:09:03.796 buf 0x200000500000 len 3145728 PASSED 00:09:03.796 malloc 64 00:09:03.796 buf 0x2000004fff40 len 64 PASSED 00:09:03.796 malloc 4194304 00:09:03.796 register 0x200000800000 6291456 00:09:03.796 buf 0x200000a00000 len 4194304 PASSED 00:09:03.796 free 0x200000500000 3145728 00:09:03.796 free 0x2000004fff40 64 00:09:03.796 unregister 0x200000400000 4194304 PASSED 00:09:03.796 free 0x200000a00000 4194304 00:09:03.796 unregister 0x200000800000 6291456 PASSED 00:09:03.796 malloc 8388608 00:09:03.796 register 0x200000400000 10485760 00:09:03.796 buf 0x200000600000 len 8388608 PASSED 00:09:03.796 free 0x200000600000 8388608 00:09:03.796 unregister 0x200000400000 10485760 PASSED 00:09:03.796 passed 00:09:03.796 00:09:03.796 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.796 suites 1 1 n/a 0 0 00:09:03.796 tests 1 1 1 0 0 00:09:03.796 asserts 15 15 15 0 n/a 00:09:03.796 00:09:03.796 Elapsed time = 0.008 seconds 00:09:03.796 00:09:03.796 real 0m0.146s 00:09:03.796 user 0m0.015s 00:09:03.796 sys 0m0.030s 00:09:03.796 12:20:32 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.796 12:20:32 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:03.796 ************************************ 00:09:03.796 END TEST env_mem_callbacks 00:09:03.796 ************************************ 00:09:03.796 12:20:32 env -- common/autotest_common.sh@1142 -- # return 0 00:09:03.796 00:09:03.796 real 0m2.397s 00:09:03.796 user 0m1.204s 00:09:03.796 sys 0m0.842s 00:09:03.796 12:20:32 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.796 12:20:32 env -- common/autotest_common.sh@10 -- # set +x 00:09:03.796 ************************************ 00:09:03.796 END TEST env 00:09:03.796 ************************************ 00:09:03.796 12:20:32 -- common/autotest_common.sh@1142 -- # return 0 00:09:03.796 12:20:32 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:03.796 12:20:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:03.796 12:20:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.796 12:20:32 -- common/autotest_common.sh@10 -- # set +x 00:09:03.796 ************************************ 00:09:03.796 START TEST rpc 00:09:03.796 ************************************ 00:09:03.796 12:20:32 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:04.059 * Looking for test storage... 00:09:04.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:04.059 12:20:32 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70897 00:09:04.059 12:20:32 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:04.059 12:20:32 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:04.059 12:20:32 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70897 00:09:04.059 12:20:32 rpc -- common/autotest_common.sh@829 -- # '[' -z 70897 ']' 00:09:04.059 12:20:32 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.059 12:20:32 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.059 12:20:32 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
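At this point the env suite is finished and the rpc suite starts a long-lived spdk_tgt with the bdev tracepoint group enabled (-e bdev), then blocks in waitforlisten until the JSON-RPC socket appears. A rough stand-alone equivalent, with the harness's waitforlisten helper replaced by a simple poll of the UNIX socket, is:

```bash
# Start the target with bdev tracepoints enabled and wait for the RPC
# socket before issuing commands. The polling loop is an illustrative
# simplification of the waitforlisten helper used by the test scripts.
./build/bin/spdk_tgt -e bdev &
tgt_pid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
./scripts/rpc.py spdk_get_version    # any cheap RPC confirms the server is up
```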
00:09:04.059 12:20:32 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.059 12:20:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.059 [2024-07-12 12:20:33.009460] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:09:04.059 [2024-07-12 12:20:33.009561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70897 ] 00:09:04.319 [2024-07-12 12:20:33.151287] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.319 [2024-07-12 12:20:33.249211] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:04.319 [2024-07-12 12:20:33.249288] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70897' to capture a snapshot of events at runtime. 00:09:04.319 [2024-07-12 12:20:33.249316] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.319 [2024-07-12 12:20:33.249327] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.319 [2024-07-12 12:20:33.249336] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70897 for offline analysis/debug. 00:09:04.319 [2024-07-12 12:20:33.249372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.319 [2024-07-12 12:20:33.307254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:05.252 12:20:33 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:05.252 12:20:33 rpc -- common/autotest_common.sh@862 -- # return 0 00:09:05.252 12:20:33 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:05.252 12:20:33 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:05.252 12:20:33 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:05.252 12:20:33 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:05.252 12:20:33 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:05.252 12:20:33 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.252 12:20:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.252 ************************************ 00:09:05.252 START TEST rpc_integrity 00:09:05.252 ************************************ 00:09:05.252 12:20:33 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:09:05.252 12:20:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:05.252 12:20:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.252 12:20:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.252 12:20:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.252 12:20:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:05.252 12:20:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:05.252 12:20:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:05.252 12:20:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:09:05.252 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.252 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.252 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.252 12:20:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:05.252 12:20:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:05.252 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.252 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.252 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.252 12:20:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:05.252 { 00:09:05.252 "name": "Malloc0", 00:09:05.252 "aliases": [ 00:09:05.252 "d7fa48e0-cebf-4a65-8b21-462bc6eb9061" 00:09:05.252 ], 00:09:05.252 "product_name": "Malloc disk", 00:09:05.252 "block_size": 512, 00:09:05.252 "num_blocks": 16384, 00:09:05.252 "uuid": "d7fa48e0-cebf-4a65-8b21-462bc6eb9061", 00:09:05.252 "assigned_rate_limits": { 00:09:05.252 "rw_ios_per_sec": 0, 00:09:05.252 "rw_mbytes_per_sec": 0, 00:09:05.252 "r_mbytes_per_sec": 0, 00:09:05.252 "w_mbytes_per_sec": 0 00:09:05.252 }, 00:09:05.252 "claimed": false, 00:09:05.252 "zoned": false, 00:09:05.252 "supported_io_types": { 00:09:05.252 "read": true, 00:09:05.252 "write": true, 00:09:05.252 "unmap": true, 00:09:05.252 "flush": true, 00:09:05.252 "reset": true, 00:09:05.252 "nvme_admin": false, 00:09:05.252 "nvme_io": false, 00:09:05.252 "nvme_io_md": false, 00:09:05.252 "write_zeroes": true, 00:09:05.252 "zcopy": true, 00:09:05.252 "get_zone_info": false, 00:09:05.252 "zone_management": false, 00:09:05.252 "zone_append": false, 00:09:05.252 "compare": false, 00:09:05.252 "compare_and_write": false, 00:09:05.252 "abort": true, 00:09:05.252 "seek_hole": false, 00:09:05.252 "seek_data": false, 00:09:05.252 "copy": true, 00:09:05.252 "nvme_iov_md": false 00:09:05.252 }, 00:09:05.252 "memory_domains": [ 00:09:05.252 { 00:09:05.252 "dma_device_id": "system", 00:09:05.252 "dma_device_type": 1 00:09:05.252 }, 00:09:05.252 { 00:09:05.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.252 "dma_device_type": 2 00:09:05.252 } 00:09:05.252 ], 00:09:05.252 "driver_specific": {} 00:09:05.252 } 00:09:05.252 ]' 00:09:05.252 12:20:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:05.252 12:20:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:05.252 12:20:34 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:05.252 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.252 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.252 [2024-07-12 12:20:34.139143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:05.252 [2024-07-12 12:20:34.139277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.252 [2024-07-12 12:20:34.139299] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x815070 00:09:05.252 [2024-07-12 12:20:34.139309] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.252 [2024-07-12 12:20:34.141185] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.252 [2024-07-12 12:20:34.141254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:09:05.252 Passthru0 00:09:05.252 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.252 12:20:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:05.252 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.252 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.252 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.252 12:20:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:05.252 { 00:09:05.252 "name": "Malloc0", 00:09:05.252 "aliases": [ 00:09:05.252 "d7fa48e0-cebf-4a65-8b21-462bc6eb9061" 00:09:05.252 ], 00:09:05.252 "product_name": "Malloc disk", 00:09:05.252 "block_size": 512, 00:09:05.252 "num_blocks": 16384, 00:09:05.252 "uuid": "d7fa48e0-cebf-4a65-8b21-462bc6eb9061", 00:09:05.252 "assigned_rate_limits": { 00:09:05.252 "rw_ios_per_sec": 0, 00:09:05.252 "rw_mbytes_per_sec": 0, 00:09:05.252 "r_mbytes_per_sec": 0, 00:09:05.252 "w_mbytes_per_sec": 0 00:09:05.252 }, 00:09:05.252 "claimed": true, 00:09:05.252 "claim_type": "exclusive_write", 00:09:05.252 "zoned": false, 00:09:05.252 "supported_io_types": { 00:09:05.252 "read": true, 00:09:05.252 "write": true, 00:09:05.252 "unmap": true, 00:09:05.252 "flush": true, 00:09:05.252 "reset": true, 00:09:05.252 "nvme_admin": false, 00:09:05.252 "nvme_io": false, 00:09:05.252 "nvme_io_md": false, 00:09:05.252 "write_zeroes": true, 00:09:05.252 "zcopy": true, 00:09:05.252 "get_zone_info": false, 00:09:05.252 "zone_management": false, 00:09:05.252 "zone_append": false, 00:09:05.252 "compare": false, 00:09:05.252 "compare_and_write": false, 00:09:05.252 "abort": true, 00:09:05.252 "seek_hole": false, 00:09:05.252 "seek_data": false, 00:09:05.252 "copy": true, 00:09:05.252 "nvme_iov_md": false 00:09:05.252 }, 00:09:05.252 "memory_domains": [ 00:09:05.252 { 00:09:05.252 "dma_device_id": "system", 00:09:05.252 "dma_device_type": 1 00:09:05.252 }, 00:09:05.252 { 00:09:05.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.252 "dma_device_type": 2 00:09:05.252 } 00:09:05.252 ], 00:09:05.252 "driver_specific": {} 00:09:05.252 }, 00:09:05.252 { 00:09:05.252 "name": "Passthru0", 00:09:05.252 "aliases": [ 00:09:05.252 "99dcf41f-ae8b-591f-bbd8-64bb8cb10b47" 00:09:05.252 ], 00:09:05.252 "product_name": "passthru", 00:09:05.252 "block_size": 512, 00:09:05.252 "num_blocks": 16384, 00:09:05.252 "uuid": "99dcf41f-ae8b-591f-bbd8-64bb8cb10b47", 00:09:05.252 "assigned_rate_limits": { 00:09:05.252 "rw_ios_per_sec": 0, 00:09:05.252 "rw_mbytes_per_sec": 0, 00:09:05.252 "r_mbytes_per_sec": 0, 00:09:05.252 "w_mbytes_per_sec": 0 00:09:05.252 }, 00:09:05.252 "claimed": false, 00:09:05.252 "zoned": false, 00:09:05.252 "supported_io_types": { 00:09:05.252 "read": true, 00:09:05.252 "write": true, 00:09:05.252 "unmap": true, 00:09:05.252 "flush": true, 00:09:05.252 "reset": true, 00:09:05.252 "nvme_admin": false, 00:09:05.252 "nvme_io": false, 00:09:05.252 "nvme_io_md": false, 00:09:05.252 "write_zeroes": true, 00:09:05.252 "zcopy": true, 00:09:05.252 "get_zone_info": false, 00:09:05.252 "zone_management": false, 00:09:05.252 "zone_append": false, 00:09:05.252 "compare": false, 00:09:05.252 "compare_and_write": false, 00:09:05.252 "abort": true, 00:09:05.252 "seek_hole": false, 00:09:05.252 "seek_data": false, 00:09:05.252 "copy": true, 00:09:05.252 "nvme_iov_md": false 00:09:05.252 }, 00:09:05.252 "memory_domains": [ 00:09:05.252 { 00:09:05.252 "dma_device_id": "system", 00:09:05.252 
"dma_device_type": 1 00:09:05.252 }, 00:09:05.252 { 00:09:05.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.252 "dma_device_type": 2 00:09:05.252 } 00:09:05.252 ], 00:09:05.252 "driver_specific": { 00:09:05.252 "passthru": { 00:09:05.252 "name": "Passthru0", 00:09:05.252 "base_bdev_name": "Malloc0" 00:09:05.252 } 00:09:05.252 } 00:09:05.252 } 00:09:05.252 ]' 00:09:05.252 12:20:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:05.252 12:20:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:05.252 12:20:34 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:05.252 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.252 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.252 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.253 12:20:34 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:05.253 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.253 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.253 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.253 12:20:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:05.253 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.253 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.253 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.253 12:20:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:05.253 12:20:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:05.253 12:20:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:05.253 00:09:05.253 real 0m0.339s 00:09:05.253 user 0m0.222s 00:09:05.253 sys 0m0.047s 00:09:05.253 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.253 12:20:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.253 ************************************ 00:09:05.253 END TEST rpc_integrity 00:09:05.253 ************************************ 00:09:05.511 12:20:34 rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:05.511 12:20:34 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:05.511 12:20:34 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:05.511 12:20:34 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.511 12:20:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.511 ************************************ 00:09:05.511 START TEST rpc_plugins 00:09:05.511 ************************************ 00:09:05.511 12:20:34 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:09:05.511 12:20:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:05.511 12:20:34 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.511 12:20:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:05.511 12:20:34 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.511 12:20:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:05.511 12:20:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:05.511 12:20:34 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.511 12:20:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:05.511 
12:20:34 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.511 12:20:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:05.511 { 00:09:05.511 "name": "Malloc1", 00:09:05.511 "aliases": [ 00:09:05.511 "6265b535-519e-4538-aae5-73ea0a03511a" 00:09:05.511 ], 00:09:05.511 "product_name": "Malloc disk", 00:09:05.511 "block_size": 4096, 00:09:05.511 "num_blocks": 256, 00:09:05.511 "uuid": "6265b535-519e-4538-aae5-73ea0a03511a", 00:09:05.511 "assigned_rate_limits": { 00:09:05.511 "rw_ios_per_sec": 0, 00:09:05.511 "rw_mbytes_per_sec": 0, 00:09:05.511 "r_mbytes_per_sec": 0, 00:09:05.511 "w_mbytes_per_sec": 0 00:09:05.511 }, 00:09:05.511 "claimed": false, 00:09:05.511 "zoned": false, 00:09:05.511 "supported_io_types": { 00:09:05.511 "read": true, 00:09:05.511 "write": true, 00:09:05.511 "unmap": true, 00:09:05.511 "flush": true, 00:09:05.511 "reset": true, 00:09:05.511 "nvme_admin": false, 00:09:05.511 "nvme_io": false, 00:09:05.511 "nvme_io_md": false, 00:09:05.511 "write_zeroes": true, 00:09:05.511 "zcopy": true, 00:09:05.511 "get_zone_info": false, 00:09:05.511 "zone_management": false, 00:09:05.511 "zone_append": false, 00:09:05.511 "compare": false, 00:09:05.511 "compare_and_write": false, 00:09:05.511 "abort": true, 00:09:05.511 "seek_hole": false, 00:09:05.511 "seek_data": false, 00:09:05.511 "copy": true, 00:09:05.511 "nvme_iov_md": false 00:09:05.511 }, 00:09:05.511 "memory_domains": [ 00:09:05.511 { 00:09:05.511 "dma_device_id": "system", 00:09:05.511 "dma_device_type": 1 00:09:05.511 }, 00:09:05.511 { 00:09:05.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.511 "dma_device_type": 2 00:09:05.511 } 00:09:05.511 ], 00:09:05.511 "driver_specific": {} 00:09:05.511 } 00:09:05.511 ]' 00:09:05.511 12:20:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:05.511 12:20:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:05.511 12:20:34 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:05.511 12:20:34 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.511 12:20:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:05.511 12:20:34 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.511 12:20:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:05.511 12:20:34 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.511 12:20:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:05.511 12:20:34 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.511 12:20:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:05.511 12:20:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:05.511 12:20:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:05.511 00:09:05.511 real 0m0.158s 00:09:05.511 user 0m0.104s 00:09:05.511 sys 0m0.020s 00:09:05.511 12:20:34 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.511 ************************************ 00:09:05.511 END TEST rpc_plugins 00:09:05.511 12:20:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:05.511 ************************************ 00:09:05.511 12:20:34 rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:05.511 12:20:34 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:05.511 12:20:34 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:05.511 12:20:34 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 
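rpc_plugins relies on rpc.py's --plugin option: a Python module named rpc_plugin is picked up from PYTHONPATH (the export earlier in this run adds test/rpc_plugins) and contributes the create_malloc and delete_malloc methods used above. Condensed:

```bash
# The plugin module lives under test/rpc_plugins in this repo; rpc.py
# imports it by name and exposes its extra methods on the command line.
export PYTHONPATH=$PYTHONPATH:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
./scripts/rpc.py --plugin rpc_plugin create_malloc            # reports Malloc1
./scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1
```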
00:09:05.511 12:20:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.511 ************************************ 00:09:05.511 START TEST rpc_trace_cmd_test 00:09:05.511 ************************************ 00:09:05.511 12:20:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:09:05.511 12:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:05.511 12:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:05.511 12:20:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.511 12:20:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.769 12:20:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.769 12:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:05.769 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70897", 00:09:05.769 "tpoint_group_mask": "0x8", 00:09:05.769 "iscsi_conn": { 00:09:05.769 "mask": "0x2", 00:09:05.769 "tpoint_mask": "0x0" 00:09:05.769 }, 00:09:05.769 "scsi": { 00:09:05.769 "mask": "0x4", 00:09:05.769 "tpoint_mask": "0x0" 00:09:05.769 }, 00:09:05.769 "bdev": { 00:09:05.769 "mask": "0x8", 00:09:05.769 "tpoint_mask": "0xffffffffffffffff" 00:09:05.769 }, 00:09:05.769 "nvmf_rdma": { 00:09:05.769 "mask": "0x10", 00:09:05.769 "tpoint_mask": "0x0" 00:09:05.769 }, 00:09:05.769 "nvmf_tcp": { 00:09:05.769 "mask": "0x20", 00:09:05.769 "tpoint_mask": "0x0" 00:09:05.769 }, 00:09:05.769 "ftl": { 00:09:05.769 "mask": "0x40", 00:09:05.769 "tpoint_mask": "0x0" 00:09:05.769 }, 00:09:05.769 "blobfs": { 00:09:05.769 "mask": "0x80", 00:09:05.769 "tpoint_mask": "0x0" 00:09:05.769 }, 00:09:05.769 "dsa": { 00:09:05.769 "mask": "0x200", 00:09:05.769 "tpoint_mask": "0x0" 00:09:05.769 }, 00:09:05.769 "thread": { 00:09:05.769 "mask": "0x400", 00:09:05.769 "tpoint_mask": "0x0" 00:09:05.769 }, 00:09:05.769 "nvme_pcie": { 00:09:05.769 "mask": "0x800", 00:09:05.769 "tpoint_mask": "0x0" 00:09:05.769 }, 00:09:05.769 "iaa": { 00:09:05.769 "mask": "0x1000", 00:09:05.769 "tpoint_mask": "0x0" 00:09:05.769 }, 00:09:05.769 "nvme_tcp": { 00:09:05.769 "mask": "0x2000", 00:09:05.769 "tpoint_mask": "0x0" 00:09:05.769 }, 00:09:05.769 "bdev_nvme": { 00:09:05.769 "mask": "0x4000", 00:09:05.769 "tpoint_mask": "0x0" 00:09:05.769 }, 00:09:05.769 "sock": { 00:09:05.769 "mask": "0x8000", 00:09:05.769 "tpoint_mask": "0x0" 00:09:05.769 } 00:09:05.769 }' 00:09:05.769 12:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:05.769 12:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:09:05.769 12:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:05.769 12:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:05.769 12:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:05.769 12:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:05.769 12:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:05.769 12:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:05.769 12:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:06.026 ************************************ 00:09:06.027 END TEST rpc_trace_cmd_test 00:09:06.027 ************************************ 00:09:06.027 12:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:06.027 00:09:06.027 real 0m0.292s 00:09:06.027 user 0m0.249s 
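The trace_get_info output above reflects the -e bdev flag the target was started with: the bdev group mask is fully set (0xffffffffffffffff) and the tracepoints are mirrored into the shared-memory file named after the target's PID. As the target itself noted at startup, the snapshot can be inspected with spdk_trace; a sketch using the values from this run:

```bash
./scripts/rpc.py trace_get_info | jq -r .tpoint_shm_path   # /dev/shm/spdk_tgt_trace.pid70897
spdk_trace -s spdk_tgt -p 70897                            # decode the captured events
```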
00:09:06.027 sys 0m0.026s 00:09:06.027 12:20:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.027 12:20:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.027 12:20:34 rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:06.027 12:20:34 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:06.027 12:20:34 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:06.027 12:20:34 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:06.027 12:20:34 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:06.027 12:20:34 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.027 12:20:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.027 ************************************ 00:09:06.027 START TEST rpc_daemon_integrity 00:09:06.027 ************************************ 00:09:06.027 12:20:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:09:06.027 12:20:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:06.027 12:20:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.027 12:20:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.027 12:20:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.027 12:20:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:06.027 12:20:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:06.027 12:20:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:06.027 12:20:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:06.027 12:20:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.027 12:20:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.027 12:20:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.027 12:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:06.027 12:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:06.027 12:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.027 12:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.027 12:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.027 12:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:06.027 { 00:09:06.027 "name": "Malloc2", 00:09:06.027 "aliases": [ 00:09:06.027 "45494de8-da43-4bf7-9b2a-52f95a2845fd" 00:09:06.027 ], 00:09:06.027 "product_name": "Malloc disk", 00:09:06.027 "block_size": 512, 00:09:06.027 "num_blocks": 16384, 00:09:06.027 "uuid": "45494de8-da43-4bf7-9b2a-52f95a2845fd", 00:09:06.027 "assigned_rate_limits": { 00:09:06.027 "rw_ios_per_sec": 0, 00:09:06.027 "rw_mbytes_per_sec": 0, 00:09:06.027 "r_mbytes_per_sec": 0, 00:09:06.027 "w_mbytes_per_sec": 0 00:09:06.027 }, 00:09:06.027 "claimed": false, 00:09:06.027 "zoned": false, 00:09:06.027 "supported_io_types": { 00:09:06.027 "read": true, 00:09:06.027 "write": true, 00:09:06.027 "unmap": true, 00:09:06.027 "flush": true, 00:09:06.027 "reset": true, 00:09:06.027 "nvme_admin": false, 00:09:06.027 "nvme_io": false, 00:09:06.027 "nvme_io_md": false, 00:09:06.027 "write_zeroes": true, 00:09:06.027 "zcopy": true, 00:09:06.027 "get_zone_info": false, 00:09:06.027 "zone_management": false, 00:09:06.027 "zone_append": false, 
00:09:06.027 "compare": false, 00:09:06.027 "compare_and_write": false, 00:09:06.027 "abort": true, 00:09:06.027 "seek_hole": false, 00:09:06.027 "seek_data": false, 00:09:06.027 "copy": true, 00:09:06.027 "nvme_iov_md": false 00:09:06.027 }, 00:09:06.027 "memory_domains": [ 00:09:06.027 { 00:09:06.027 "dma_device_id": "system", 00:09:06.027 "dma_device_type": 1 00:09:06.027 }, 00:09:06.027 { 00:09:06.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.027 "dma_device_type": 2 00:09:06.027 } 00:09:06.027 ], 00:09:06.027 "driver_specific": {} 00:09:06.027 } 00:09:06.027 ]' 00:09:06.027 12:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:06.027 12:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:06.027 12:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:06.027 12:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.027 12:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.027 [2024-07-12 12:20:35.080150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:06.027 [2024-07-12 12:20:35.080220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.027 [2024-07-12 12:20:35.080258] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x806a10 00:09:06.027 [2024-07-12 12:20:35.080275] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.027 [2024-07-12 12:20:35.081918] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.027 [2024-07-12 12:20:35.081958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:06.027 Passthru0 00:09:06.027 12:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.027 12:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:06.027 12:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.027 12:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.285 12:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.285 12:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:06.285 { 00:09:06.285 "name": "Malloc2", 00:09:06.285 "aliases": [ 00:09:06.285 "45494de8-da43-4bf7-9b2a-52f95a2845fd" 00:09:06.285 ], 00:09:06.285 "product_name": "Malloc disk", 00:09:06.285 "block_size": 512, 00:09:06.285 "num_blocks": 16384, 00:09:06.285 "uuid": "45494de8-da43-4bf7-9b2a-52f95a2845fd", 00:09:06.285 "assigned_rate_limits": { 00:09:06.285 "rw_ios_per_sec": 0, 00:09:06.285 "rw_mbytes_per_sec": 0, 00:09:06.285 "r_mbytes_per_sec": 0, 00:09:06.285 "w_mbytes_per_sec": 0 00:09:06.285 }, 00:09:06.285 "claimed": true, 00:09:06.285 "claim_type": "exclusive_write", 00:09:06.285 "zoned": false, 00:09:06.285 "supported_io_types": { 00:09:06.285 "read": true, 00:09:06.285 "write": true, 00:09:06.285 "unmap": true, 00:09:06.285 "flush": true, 00:09:06.285 "reset": true, 00:09:06.285 "nvme_admin": false, 00:09:06.285 "nvme_io": false, 00:09:06.285 "nvme_io_md": false, 00:09:06.285 "write_zeroes": true, 00:09:06.285 "zcopy": true, 00:09:06.285 "get_zone_info": false, 00:09:06.285 "zone_management": false, 00:09:06.285 "zone_append": false, 00:09:06.285 "compare": false, 00:09:06.285 "compare_and_write": false, 00:09:06.285 "abort": true, 00:09:06.285 "seek_hole": 
false, 00:09:06.285 "seek_data": false, 00:09:06.285 "copy": true, 00:09:06.285 "nvme_iov_md": false 00:09:06.285 }, 00:09:06.285 "memory_domains": [ 00:09:06.285 { 00:09:06.285 "dma_device_id": "system", 00:09:06.285 "dma_device_type": 1 00:09:06.285 }, 00:09:06.285 { 00:09:06.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.285 "dma_device_type": 2 00:09:06.285 } 00:09:06.285 ], 00:09:06.285 "driver_specific": {} 00:09:06.285 }, 00:09:06.285 { 00:09:06.285 "name": "Passthru0", 00:09:06.285 "aliases": [ 00:09:06.285 "4f3ef528-15c7-5c93-af84-bf1220414e1a" 00:09:06.285 ], 00:09:06.285 "product_name": "passthru", 00:09:06.285 "block_size": 512, 00:09:06.285 "num_blocks": 16384, 00:09:06.285 "uuid": "4f3ef528-15c7-5c93-af84-bf1220414e1a", 00:09:06.285 "assigned_rate_limits": { 00:09:06.285 "rw_ios_per_sec": 0, 00:09:06.285 "rw_mbytes_per_sec": 0, 00:09:06.285 "r_mbytes_per_sec": 0, 00:09:06.285 "w_mbytes_per_sec": 0 00:09:06.285 }, 00:09:06.285 "claimed": false, 00:09:06.285 "zoned": false, 00:09:06.285 "supported_io_types": { 00:09:06.285 "read": true, 00:09:06.285 "write": true, 00:09:06.285 "unmap": true, 00:09:06.285 "flush": true, 00:09:06.285 "reset": true, 00:09:06.285 "nvme_admin": false, 00:09:06.285 "nvme_io": false, 00:09:06.285 "nvme_io_md": false, 00:09:06.285 "write_zeroes": true, 00:09:06.285 "zcopy": true, 00:09:06.285 "get_zone_info": false, 00:09:06.285 "zone_management": false, 00:09:06.285 "zone_append": false, 00:09:06.285 "compare": false, 00:09:06.285 "compare_and_write": false, 00:09:06.285 "abort": true, 00:09:06.285 "seek_hole": false, 00:09:06.285 "seek_data": false, 00:09:06.285 "copy": true, 00:09:06.285 "nvme_iov_md": false 00:09:06.285 }, 00:09:06.285 "memory_domains": [ 00:09:06.285 { 00:09:06.285 "dma_device_id": "system", 00:09:06.285 "dma_device_type": 1 00:09:06.285 }, 00:09:06.285 { 00:09:06.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.285 "dma_device_type": 2 00:09:06.285 } 00:09:06.285 ], 00:09:06.285 "driver_specific": { 00:09:06.285 "passthru": { 00:09:06.285 "name": "Passthru0", 00:09:06.285 "base_bdev_name": "Malloc2" 00:09:06.285 } 00:09:06.285 } 00:09:06.285 } 00:09:06.285 ]' 00:09:06.285 12:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:06.285 12:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:06.286 12:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:06.286 12:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.286 12:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.286 12:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.286 12:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:06.286 12:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.286 12:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.286 12:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.286 12:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:06.286 12:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.286 12:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.286 12:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.286 12:20:35 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:09:06.286 12:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:06.286 ************************************ 00:09:06.286 END TEST rpc_daemon_integrity 00:09:06.286 ************************************ 00:09:06.286 12:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:06.286 00:09:06.286 real 0m0.332s 00:09:06.286 user 0m0.229s 00:09:06.286 sys 0m0.035s 00:09:06.286 12:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.286 12:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.286 12:20:35 rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:06.286 12:20:35 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:06.286 12:20:35 rpc -- rpc/rpc.sh@84 -- # killprocess 70897 00:09:06.286 12:20:35 rpc -- common/autotest_common.sh@948 -- # '[' -z 70897 ']' 00:09:06.286 12:20:35 rpc -- common/autotest_common.sh@952 -- # kill -0 70897 00:09:06.286 12:20:35 rpc -- common/autotest_common.sh@953 -- # uname 00:09:06.286 12:20:35 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:06.286 12:20:35 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70897 00:09:06.286 killing process with pid 70897 00:09:06.286 12:20:35 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:06.286 12:20:35 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:06.286 12:20:35 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70897' 00:09:06.286 12:20:35 rpc -- common/autotest_common.sh@967 -- # kill 70897 00:09:06.286 12:20:35 rpc -- common/autotest_common.sh@972 -- # wait 70897 00:09:06.851 ************************************ 00:09:06.851 END TEST rpc 00:09:06.851 ************************************ 00:09:06.851 00:09:06.851 real 0m2.850s 00:09:06.851 user 0m3.717s 00:09:06.851 sys 0m0.679s 00:09:06.851 12:20:35 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.851 12:20:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.851 12:20:35 -- common/autotest_common.sh@1142 -- # return 0 00:09:06.851 12:20:35 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:06.851 12:20:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:06.851 12:20:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.851 12:20:35 -- common/autotest_common.sh@10 -- # set +x 00:09:06.851 ************************************ 00:09:06.851 START TEST skip_rpc 00:09:06.851 ************************************ 00:09:06.851 12:20:35 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:06.851 * Looking for test storage... 
00:09:06.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:06.851 12:20:35 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:06.851 12:20:35 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:06.851 12:20:35 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:06.851 12:20:35 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:06.851 12:20:35 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.851 12:20:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.851 ************************************ 00:09:06.851 START TEST skip_rpc 00:09:06.851 ************************************ 00:09:06.851 12:20:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:09:06.851 12:20:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=71090 00:09:06.851 12:20:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:06.851 12:20:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:06.851 12:20:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:06.851 [2024-07-12 12:20:35.910006] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:09:06.851 [2024-07-12 12:20:35.910194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71090 ] 00:09:07.110 [2024-07-12 12:20:36.050672] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.110 [2024-07-12 12:20:36.138027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.110 [2024-07-12 12:20:36.192583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 71090 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 71090 ']' 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 71090 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71090 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:12.371 killing process with pid 71090 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71090' 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 71090 00:09:12.371 12:20:40 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 71090 00:09:12.371 00:09:12.371 real 0m5.417s 00:09:12.371 user 0m5.036s 00:09:12.371 sys 0m0.282s 00:09:12.372 12:20:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.372 12:20:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.372 ************************************ 00:09:12.372 END TEST skip_rpc 00:09:12.372 ************************************ 00:09:12.372 12:20:41 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:12.372 12:20:41 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:12.372 12:20:41 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:12.372 12:20:41 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.372 12:20:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.372 ************************************ 00:09:12.372 START TEST skip_rpc_with_json 00:09:12.372 ************************************ 00:09:12.372 12:20:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:09:12.372 12:20:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:12.372 12:20:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71182 00:09:12.372 12:20:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:12.372 12:20:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:12.372 12:20:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71182 00:09:12.372 12:20:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 71182 ']' 00:09:12.372 12:20:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.372 12:20:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:12.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.372 12:20:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
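skip_rpc checks the opposite path: the target is started with --no-rpc-server, so /var/tmp/spdk.sock is never created and every rpc_cmd is expected to fail; the NOT wrapper above turns that failure into the passing condition. A condensed stand-alone version:

```bash
# With --no-rpc-server the target runs but never listens for JSON-RPC,
# so a failing RPC here is the expected (passing) outcome.
./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
sleep 5
if ./scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC server answered"; exit 1
fi
```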
00:09:12.372 12:20:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:12.372 12:20:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:12.372 [2024-07-12 12:20:41.376428] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:09:12.372 [2024-07-12 12:20:41.376548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71182 ] 00:09:12.630 [2024-07-12 12:20:41.508360] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.630 [2024-07-12 12:20:41.597821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.630 [2024-07-12 12:20:41.651283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:13.567 12:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:13.567 12:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:09:13.567 12:20:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:13.567 12:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.567 12:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:13.567 [2024-07-12 12:20:42.315959] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:13.567 request: 00:09:13.567 { 00:09:13.567 "trtype": "tcp", 00:09:13.567 "method": "nvmf_get_transports", 00:09:13.567 "req_id": 1 00:09:13.567 } 00:09:13.567 Got JSON-RPC error response 00:09:13.567 response: 00:09:13.567 { 00:09:13.567 "code": -19, 00:09:13.567 "message": "No such device" 00:09:13.567 } 00:09:13.567 12:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:13.567 12:20:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:13.567 12:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.567 12:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:13.567 [2024-07-12 12:20:42.332086] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.567 12:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.567 12:20:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:13.567 12:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.567 12:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:13.567 12:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.567 12:20:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:13.567 { 00:09:13.567 "subsystems": [ 00:09:13.567 { 00:09:13.567 "subsystem": "keyring", 00:09:13.567 "config": [] 00:09:13.567 }, 00:09:13.567 { 00:09:13.567 "subsystem": "iobuf", 00:09:13.567 "config": [ 00:09:13.567 { 00:09:13.567 "method": "iobuf_set_options", 00:09:13.567 "params": { 00:09:13.567 "small_pool_count": 8192, 00:09:13.567 "large_pool_count": 1024, 00:09:13.567 "small_bufsize": 8192, 00:09:13.567 "large_bufsize": 135168 00:09:13.567 } 00:09:13.567 } 00:09:13.567 
] 00:09:13.567 }, 00:09:13.567 { 00:09:13.567 "subsystem": "sock", 00:09:13.567 "config": [ 00:09:13.567 { 00:09:13.567 "method": "sock_set_default_impl", 00:09:13.567 "params": { 00:09:13.567 "impl_name": "uring" 00:09:13.567 } 00:09:13.567 }, 00:09:13.567 { 00:09:13.567 "method": "sock_impl_set_options", 00:09:13.567 "params": { 00:09:13.567 "impl_name": "ssl", 00:09:13.567 "recv_buf_size": 4096, 00:09:13.567 "send_buf_size": 4096, 00:09:13.567 "enable_recv_pipe": true, 00:09:13.567 "enable_quickack": false, 00:09:13.567 "enable_placement_id": 0, 00:09:13.567 "enable_zerocopy_send_server": true, 00:09:13.567 "enable_zerocopy_send_client": false, 00:09:13.567 "zerocopy_threshold": 0, 00:09:13.567 "tls_version": 0, 00:09:13.567 "enable_ktls": false 00:09:13.567 } 00:09:13.567 }, 00:09:13.567 { 00:09:13.567 "method": "sock_impl_set_options", 00:09:13.567 "params": { 00:09:13.567 "impl_name": "posix", 00:09:13.567 "recv_buf_size": 2097152, 00:09:13.567 "send_buf_size": 2097152, 00:09:13.567 "enable_recv_pipe": true, 00:09:13.567 "enable_quickack": false, 00:09:13.567 "enable_placement_id": 0, 00:09:13.567 "enable_zerocopy_send_server": true, 00:09:13.567 "enable_zerocopy_send_client": false, 00:09:13.567 "zerocopy_threshold": 0, 00:09:13.567 "tls_version": 0, 00:09:13.567 "enable_ktls": false 00:09:13.567 } 00:09:13.567 }, 00:09:13.567 { 00:09:13.567 "method": "sock_impl_set_options", 00:09:13.567 "params": { 00:09:13.567 "impl_name": "uring", 00:09:13.567 "recv_buf_size": 2097152, 00:09:13.567 "send_buf_size": 2097152, 00:09:13.567 "enable_recv_pipe": true, 00:09:13.567 "enable_quickack": false, 00:09:13.567 "enable_placement_id": 0, 00:09:13.567 "enable_zerocopy_send_server": false, 00:09:13.567 "enable_zerocopy_send_client": false, 00:09:13.567 "zerocopy_threshold": 0, 00:09:13.567 "tls_version": 0, 00:09:13.567 "enable_ktls": false 00:09:13.567 } 00:09:13.567 } 00:09:13.567 ] 00:09:13.567 }, 00:09:13.567 { 00:09:13.567 "subsystem": "vmd", 00:09:13.567 "config": [] 00:09:13.567 }, 00:09:13.567 { 00:09:13.567 "subsystem": "accel", 00:09:13.567 "config": [ 00:09:13.567 { 00:09:13.567 "method": "accel_set_options", 00:09:13.567 "params": { 00:09:13.567 "small_cache_size": 128, 00:09:13.567 "large_cache_size": 16, 00:09:13.567 "task_count": 2048, 00:09:13.567 "sequence_count": 2048, 00:09:13.567 "buf_count": 2048 00:09:13.567 } 00:09:13.567 } 00:09:13.567 ] 00:09:13.567 }, 00:09:13.567 { 00:09:13.567 "subsystem": "bdev", 00:09:13.567 "config": [ 00:09:13.567 { 00:09:13.567 "method": "bdev_set_options", 00:09:13.567 "params": { 00:09:13.567 "bdev_io_pool_size": 65535, 00:09:13.567 "bdev_io_cache_size": 256, 00:09:13.567 "bdev_auto_examine": true, 00:09:13.568 "iobuf_small_cache_size": 128, 00:09:13.568 "iobuf_large_cache_size": 16 00:09:13.568 } 00:09:13.568 }, 00:09:13.568 { 00:09:13.568 "method": "bdev_raid_set_options", 00:09:13.568 "params": { 00:09:13.568 "process_window_size_kb": 1024 00:09:13.568 } 00:09:13.568 }, 00:09:13.568 { 00:09:13.568 "method": "bdev_iscsi_set_options", 00:09:13.568 "params": { 00:09:13.568 "timeout_sec": 30 00:09:13.568 } 00:09:13.568 }, 00:09:13.568 { 00:09:13.568 "method": "bdev_nvme_set_options", 00:09:13.568 "params": { 00:09:13.568 "action_on_timeout": "none", 00:09:13.568 "timeout_us": 0, 00:09:13.568 "timeout_admin_us": 0, 00:09:13.568 "keep_alive_timeout_ms": 10000, 00:09:13.568 "arbitration_burst": 0, 00:09:13.568 "low_priority_weight": 0, 00:09:13.568 "medium_priority_weight": 0, 00:09:13.568 "high_priority_weight": 0, 00:09:13.568 
"nvme_adminq_poll_period_us": 10000, 00:09:13.568 "nvme_ioq_poll_period_us": 0, 00:09:13.568 "io_queue_requests": 0, 00:09:13.568 "delay_cmd_submit": true, 00:09:13.568 "transport_retry_count": 4, 00:09:13.568 "bdev_retry_count": 3, 00:09:13.568 "transport_ack_timeout": 0, 00:09:13.568 "ctrlr_loss_timeout_sec": 0, 00:09:13.568 "reconnect_delay_sec": 0, 00:09:13.568 "fast_io_fail_timeout_sec": 0, 00:09:13.568 "disable_auto_failback": false, 00:09:13.568 "generate_uuids": false, 00:09:13.568 "transport_tos": 0, 00:09:13.568 "nvme_error_stat": false, 00:09:13.568 "rdma_srq_size": 0, 00:09:13.568 "io_path_stat": false, 00:09:13.568 "allow_accel_sequence": false, 00:09:13.568 "rdma_max_cq_size": 0, 00:09:13.568 "rdma_cm_event_timeout_ms": 0, 00:09:13.568 "dhchap_digests": [ 00:09:13.568 "sha256", 00:09:13.568 "sha384", 00:09:13.568 "sha512" 00:09:13.568 ], 00:09:13.568 "dhchap_dhgroups": [ 00:09:13.568 "null", 00:09:13.568 "ffdhe2048", 00:09:13.568 "ffdhe3072", 00:09:13.568 "ffdhe4096", 00:09:13.568 "ffdhe6144", 00:09:13.568 "ffdhe8192" 00:09:13.568 ] 00:09:13.568 } 00:09:13.568 }, 00:09:13.568 { 00:09:13.568 "method": "bdev_nvme_set_hotplug", 00:09:13.568 "params": { 00:09:13.568 "period_us": 100000, 00:09:13.568 "enable": false 00:09:13.568 } 00:09:13.568 }, 00:09:13.568 { 00:09:13.568 "method": "bdev_wait_for_examine" 00:09:13.568 } 00:09:13.568 ] 00:09:13.568 }, 00:09:13.568 { 00:09:13.568 "subsystem": "scsi", 00:09:13.568 "config": null 00:09:13.568 }, 00:09:13.568 { 00:09:13.568 "subsystem": "scheduler", 00:09:13.568 "config": [ 00:09:13.568 { 00:09:13.568 "method": "framework_set_scheduler", 00:09:13.568 "params": { 00:09:13.568 "name": "static" 00:09:13.568 } 00:09:13.568 } 00:09:13.568 ] 00:09:13.568 }, 00:09:13.568 { 00:09:13.568 "subsystem": "vhost_scsi", 00:09:13.568 "config": [] 00:09:13.568 }, 00:09:13.568 { 00:09:13.568 "subsystem": "vhost_blk", 00:09:13.568 "config": [] 00:09:13.568 }, 00:09:13.568 { 00:09:13.568 "subsystem": "ublk", 00:09:13.568 "config": [] 00:09:13.568 }, 00:09:13.568 { 00:09:13.568 "subsystem": "nbd", 00:09:13.568 "config": [] 00:09:13.568 }, 00:09:13.568 { 00:09:13.568 "subsystem": "nvmf", 00:09:13.568 "config": [ 00:09:13.568 { 00:09:13.568 "method": "nvmf_set_config", 00:09:13.568 "params": { 00:09:13.568 "discovery_filter": "match_any", 00:09:13.568 "admin_cmd_passthru": { 00:09:13.568 "identify_ctrlr": false 00:09:13.568 } 00:09:13.568 } 00:09:13.568 }, 00:09:13.568 { 00:09:13.568 "method": "nvmf_set_max_subsystems", 00:09:13.568 "params": { 00:09:13.568 "max_subsystems": 1024 00:09:13.568 } 00:09:13.568 }, 00:09:13.568 { 00:09:13.568 "method": "nvmf_set_crdt", 00:09:13.568 "params": { 00:09:13.568 "crdt1": 0, 00:09:13.568 "crdt2": 0, 00:09:13.568 "crdt3": 0 00:09:13.568 } 00:09:13.568 }, 00:09:13.568 { 00:09:13.568 "method": "nvmf_create_transport", 00:09:13.568 "params": { 00:09:13.568 "trtype": "TCP", 00:09:13.568 "max_queue_depth": 128, 00:09:13.568 "max_io_qpairs_per_ctrlr": 127, 00:09:13.568 "in_capsule_data_size": 4096, 00:09:13.568 "max_io_size": 131072, 00:09:13.568 "io_unit_size": 131072, 00:09:13.568 "max_aq_depth": 128, 00:09:13.568 "num_shared_buffers": 511, 00:09:13.568 "buf_cache_size": 4294967295, 00:09:13.568 "dif_insert_or_strip": false, 00:09:13.568 "zcopy": false, 00:09:13.568 "c2h_success": true, 00:09:13.568 "sock_priority": 0, 00:09:13.568 "abort_timeout_sec": 1, 00:09:13.568 "ack_timeout": 0, 00:09:13.568 "data_wr_pool_size": 0 00:09:13.568 } 00:09:13.568 } 00:09:13.568 ] 00:09:13.568 }, 00:09:13.568 { 00:09:13.568 "subsystem": 
"iscsi", 00:09:13.568 "config": [ 00:09:13.568 { 00:09:13.568 "method": "iscsi_set_options", 00:09:13.568 "params": { 00:09:13.568 "node_base": "iqn.2016-06.io.spdk", 00:09:13.568 "max_sessions": 128, 00:09:13.568 "max_connections_per_session": 2, 00:09:13.568 "max_queue_depth": 64, 00:09:13.568 "default_time2wait": 2, 00:09:13.568 "default_time2retain": 20, 00:09:13.568 "first_burst_length": 8192, 00:09:13.568 "immediate_data": true, 00:09:13.568 "allow_duplicated_isid": false, 00:09:13.568 "error_recovery_level": 0, 00:09:13.568 "nop_timeout": 60, 00:09:13.568 "nop_in_interval": 30, 00:09:13.568 "disable_chap": false, 00:09:13.568 "require_chap": false, 00:09:13.568 "mutual_chap": false, 00:09:13.568 "chap_group": 0, 00:09:13.568 "max_large_datain_per_connection": 64, 00:09:13.568 "max_r2t_per_connection": 4, 00:09:13.568 "pdu_pool_size": 36864, 00:09:13.568 "immediate_data_pool_size": 16384, 00:09:13.568 "data_out_pool_size": 2048 00:09:13.568 } 00:09:13.568 } 00:09:13.568 ] 00:09:13.568 } 00:09:13.568 ] 00:09:13.568 } 00:09:13.568 12:20:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:13.568 12:20:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71182 00:09:13.568 12:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 71182 ']' 00:09:13.568 12:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 71182 00:09:13.568 12:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:09:13.568 12:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:13.568 12:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71182 00:09:13.568 12:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:13.568 12:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:13.568 killing process with pid 71182 00:09:13.568 12:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71182' 00:09:13.568 12:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 71182 00:09:13.568 12:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 71182 00:09:13.827 12:20:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71204 00:09:13.827 12:20:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:13.827 12:20:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:19.150 12:20:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71204 00:09:19.150 12:20:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 71204 ']' 00:09:19.150 12:20:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 71204 00:09:19.150 12:20:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:09:19.150 12:20:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:19.150 12:20:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71204 00:09:19.150 12:20:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:19.150 12:20:47 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:19.150 killing process with pid 71204 00:09:19.150 12:20:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71204' 00:09:19.150 12:20:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 71204 00:09:19.150 12:20:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 71204 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:19.409 00:09:19.409 real 0m6.989s 00:09:19.409 user 0m6.688s 00:09:19.409 sys 0m0.636s 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:19.409 ************************************ 00:09:19.409 END TEST skip_rpc_with_json 00:09:19.409 ************************************ 00:09:19.409 12:20:48 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:19.409 12:20:48 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:19.409 12:20:48 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:19.409 12:20:48 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.409 12:20:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.409 ************************************ 00:09:19.409 START TEST skip_rpc_with_delay 00:09:19.409 ************************************ 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:19.409 
[2024-07-12 12:20:48.417894] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:09:19.409 [2024-07-12 12:20:48.418031] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:19.409 00:09:19.409 real 0m0.082s 00:09:19.409 user 0m0.050s 00:09:19.409 sys 0m0.031s 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.409 12:20:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:19.409 ************************************ 00:09:19.409 END TEST skip_rpc_with_delay 00:09:19.409 ************************************ 00:09:19.409 12:20:48 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:19.409 12:20:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:19.409 12:20:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:19.410 12:20:48 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:19.410 12:20:48 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:19.410 12:20:48 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.410 12:20:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.410 ************************************ 00:09:19.410 START TEST exit_on_failed_rpc_init 00:09:19.410 ************************************ 00:09:19.410 12:20:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:09:19.668 12:20:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71319 00:09:19.668 12:20:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71319 00:09:19.668 12:20:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 71319 ']' 00:09:19.668 12:20:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.668 12:20:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:19.668 12:20:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:19.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.668 12:20:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.668 12:20:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:19.668 12:20:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:19.668 [2024-07-12 12:20:48.558719] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
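Note: the skip_rpc_with_delay case a few lines above only passes because spdk_tgt refuses to start when --wait-for-rpc is combined with --no-rpc-server. A minimal stand-alone sketch of that assertion follows; assert_fails is an assumed helper name used here for illustration, not the NOT / valid_exec_arg machinery from autotest_common.sh that the trace shows.

#!/usr/bin/env bash
# Sketch only: assert that a command exits non-zero.
assert_fails() {
  if "$@"; then
    echo "expected failure, but command succeeded: $*" >&2
    return 1
  fi
  return 0
}

# spdk_tgt is expected to reject --wait-for-rpc when no RPC server will start.
assert_fails /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
  --no-rpc-server -m 0x1 --wait-for-rpc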
00:09:19.668 [2024-07-12 12:20:48.558892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71319 ] 00:09:19.668 [2024-07-12 12:20:48.697112] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.927 [2024-07-12 12:20:48.791189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.927 [2024-07-12 12:20:48.852734] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:20.494 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:20.494 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:09:20.494 12:20:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:20.494 12:20:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:20.494 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:09:20.494 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:20.494 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:20.494 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.494 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:20.494 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.494 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:20.494 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.494 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:20.494 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:20.494 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:20.494 [2024-07-12 12:20:49.570819] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:09:20.494 [2024-07-12 12:20:49.570903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71337 ] 00:09:20.753 [2024-07-12 12:20:49.707383] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.753 [2024-07-12 12:20:49.798338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.753 [2024-07-12 12:20:49.798457] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
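Note: the error above comes from the second spdk_tgt instance (core mask 0x2) trying to bind the RPC socket that pid 71319 already owns, which is exactly what exit_on_failed_rpc_init provokes. A hedged sketch of the collision, run by hand against the same binary, might look like this:

# First instance owns the default RPC socket, as pid 71319 does above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
sleep 1   # crude wait; the harness uses waitforlisten instead

# A second instance pointed at the same socket fails exactly as logged:
# "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another."
# (-r selects the RPC socket path; other parts of this log use
#  /var/tmp/spdk_tgt.sock for the same reason.)
if ! /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk.sock; then
  echo "second instance failed as expected"
fi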
00:09:20.753 [2024-07-12 12:20:49.798475] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:20.753 [2024-07-12 12:20:49.798486] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:21.010 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:09:21.010 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:21.010 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:09:21.010 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:09:21.010 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:09:21.010 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:21.010 12:20:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:21.010 12:20:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71319 00:09:21.010 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 71319 ']' 00:09:21.010 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 71319 00:09:21.010 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:09:21.010 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:21.010 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71319 00:09:21.010 killing process with pid 71319 00:09:21.010 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:21.010 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:21.010 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71319' 00:09:21.010 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 71319 00:09:21.010 12:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 71319 00:09:21.269 ************************************ 00:09:21.269 END TEST exit_on_failed_rpc_init 00:09:21.269 ************************************ 00:09:21.269 00:09:21.269 real 0m1.788s 00:09:21.269 user 0m2.021s 00:09:21.269 sys 0m0.436s 00:09:21.269 12:20:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.269 12:20:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:21.269 12:20:50 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:21.269 12:20:50 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:21.269 ************************************ 00:09:21.269 END TEST skip_rpc 00:09:21.269 ************************************ 00:09:21.269 00:09:21.269 real 0m14.574s 00:09:21.269 user 0m13.893s 00:09:21.269 sys 0m1.571s 00:09:21.269 12:20:50 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.269 12:20:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.527 12:20:50 -- common/autotest_common.sh@1142 -- # return 0 00:09:21.527 12:20:50 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:21.527 12:20:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:21.527 
12:20:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.527 12:20:50 -- common/autotest_common.sh@10 -- # set +x 00:09:21.527 ************************************ 00:09:21.527 START TEST rpc_client 00:09:21.527 ************************************ 00:09:21.527 12:20:50 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:21.527 * Looking for test storage... 00:09:21.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:21.527 12:20:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:21.527 OK 00:09:21.527 12:20:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:21.527 00:09:21.527 real 0m0.102s 00:09:21.527 user 0m0.052s 00:09:21.527 sys 0m0.057s 00:09:21.527 12:20:50 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.527 ************************************ 00:09:21.527 END TEST rpc_client 00:09:21.527 12:20:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:21.527 ************************************ 00:09:21.527 12:20:50 -- common/autotest_common.sh@1142 -- # return 0 00:09:21.527 12:20:50 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:21.527 12:20:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:21.527 12:20:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.527 12:20:50 -- common/autotest_common.sh@10 -- # set +x 00:09:21.527 ************************************ 00:09:21.527 START TEST json_config 00:09:21.527 ************************************ 00:09:21.527 12:20:50 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:21.527 12:20:50 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:21.527 12:20:50 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:21.527 12:20:50 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.527 12:20:50 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.527 12:20:50 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.527 12:20:50 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.527 12:20:50 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.527 12:20:50 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.527 12:20:50 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.527 12:20:50 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.527 12:20:50 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.527 12:20:50 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.785 12:20:50 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:09:21.785 12:20:50 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:09:21.785 12:20:50 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.785 12:20:50 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.785 12:20:50 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:21.785 12:20:50 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.785 12:20:50 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:21.785 12:20:50 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.785 12:20:50 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.785 12:20:50 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.786 12:20:50 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.786 12:20:50 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.786 12:20:50 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.786 12:20:50 json_config -- paths/export.sh@5 -- # export PATH 00:09:21.786 12:20:50 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.786 12:20:50 json_config -- nvmf/common.sh@47 -- # : 0 00:09:21.786 12:20:50 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:21.786 12:20:50 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:21.786 12:20:50 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.786 12:20:50 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.786 12:20:50 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.786 12:20:50 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:21.786 12:20:50 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:21.786 12:20:50 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:21.786 12:20:50 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:21.786 12:20:50 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:21.786 12:20:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:21.786 12:20:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:21.786 INFO: JSON configuration test init 00:09:21.786 12:20:50 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 
SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:21.786 12:20:50 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:21.786 12:20:50 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:21.786 12:20:50 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:21.786 12:20:50 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:21.786 12:20:50 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:21.786 12:20:50 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:21.786 12:20:50 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:09:21.786 12:20:50 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:21.786 12:20:50 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:21.786 12:20:50 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:21.786 12:20:50 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:09:21.786 12:20:50 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:09:21.786 12:20:50 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:09:21.786 12:20:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:21.786 12:20:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.786 12:20:50 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:09:21.786 12:20:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:21.786 12:20:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.786 12:20:50 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:09:21.786 12:20:50 json_config -- json_config/common.sh@9 -- # local app=target 00:09:21.786 12:20:50 json_config -- json_config/common.sh@10 -- # shift 00:09:21.786 12:20:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:21.786 12:20:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:21.786 12:20:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:21.786 12:20:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:21.786 12:20:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:21.786 12:20:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=71455 00:09:21.786 12:20:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:21.786 Waiting for target to run... 
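Note: the associative arrays declared above (app_pid, app_socket, app_params, configs_path) are what json_config_test_start_app assembles into the target command line traced just below. A simplified sketch, under the assumption that the real logic in test/json_config/common.sh does little more than this, with start_target_with as an invented name:

declare -A app_pid app_socket app_params
app_socket[target]=/var/tmp/spdk_tgt.sock
app_params[target]='-m 0x1 -s 1024'

start_target_with() {
  local extra_args=$1
  # app_params is left unquoted on purpose so -m and -s split into args.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
    ${app_params[target]} -r "${app_socket[target]}" $extra_args &
  app_pid[target]=$!
}

start_target_with --wait-for-rpc   # matches the launch of pid 71455 below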
00:09:21.786 12:20:50 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:21.786 12:20:50 json_config -- json_config/common.sh@25 -- # waitforlisten 71455 /var/tmp/spdk_tgt.sock 00:09:21.786 12:20:50 json_config -- common/autotest_common.sh@829 -- # '[' -z 71455 ']' 00:09:21.786 12:20:50 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:21.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:21.786 12:20:50 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.786 12:20:50 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:21.786 12:20:50 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.786 12:20:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.786 [2024-07-12 12:20:50.700715] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:09:21.786 [2024-07-12 12:20:50.701650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71455 ] 00:09:22.093 [2024-07-12 12:20:51.150486] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.351 [2024-07-12 12:20:51.220379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.917 00:09:22.917 12:20:51 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:22.917 12:20:51 json_config -- common/autotest_common.sh@862 -- # return 0 00:09:22.917 12:20:51 json_config -- json_config/common.sh@26 -- # echo '' 00:09:22.917 12:20:51 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:09:22.917 12:20:51 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:09:22.917 12:20:51 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:22.917 12:20:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:22.917 12:20:51 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:09:22.917 12:20:51 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:09:22.917 12:20:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:22.917 12:20:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:22.917 12:20:51 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:22.917 12:20:51 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:09:22.917 12:20:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:23.176 [2024-07-12 12:20:52.012550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:23.176 12:20:52 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:09:23.176 12:20:52 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:23.176 12:20:52 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:23.176 12:20:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:23.176 12:20:52 
json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:23.176 12:20:52 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:23.176 12:20:52 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:23.176 12:20:52 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:09:23.176 12:20:52 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:09:23.176 12:20:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:23.434 12:20:52 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:09:23.434 12:20:52 json_config -- json_config/json_config.sh@48 -- # local get_types 00:09:23.434 12:20:52 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:09:23.434 12:20:52 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:09:23.434 12:20:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:23.434 12:20:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:23.744 12:20:52 json_config -- json_config/json_config.sh@55 -- # return 0 00:09:23.744 12:20:52 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:09:23.744 12:20:52 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:09:23.744 12:20:52 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:09:23.744 12:20:52 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:09:23.744 12:20:52 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:09:23.744 12:20:52 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:09:23.744 12:20:52 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:23.744 12:20:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:23.744 12:20:52 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:09:23.744 12:20:52 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:09:23.744 12:20:52 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:09:23.744 12:20:52 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:23.744 12:20:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:23.744 MallocForNvmf0 00:09:24.001 12:20:52 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:24.001 12:20:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:24.259 MallocForNvmf1 00:09:24.259 12:20:53 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:09:24.259 12:20:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:09:24.517 [2024-07-12 12:20:53.402071] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.517 12:20:53 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:24.517 12:20:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:24.774 12:20:53 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:24.774 12:20:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:25.032 12:20:53 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:25.032 12:20:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:25.291 12:20:54 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:25.291 12:20:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:25.291 [2024-07-12 12:20:54.346567] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:25.291 12:20:54 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:09:25.291 12:20:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:25.291 12:20:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:25.550 12:20:54 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:09:25.550 12:20:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:25.550 12:20:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:25.550 12:20:54 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:09:25.550 12:20:54 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:25.550 12:20:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:25.807 MallocBdevForConfigChangeCheck 00:09:25.807 12:20:54 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:09:25.807 12:20:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:25.807 12:20:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:25.807 12:20:54 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:09:25.807 12:20:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:26.065 INFO: shutting down applications... 00:09:26.065 12:20:55 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
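Note: before the shutdown that follows, the tgt_rpc calls above are worth collecting in one place. The sketch below replays the same sequence against a running target; paths and arguments are taken from the log (malloc sizes are MB and block size), and only the redirect target of the final save_config is implied rather than shown.

#!/usr/bin/env bash
set -e
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1

$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

# Extra bdev whose later deletion proves that config changes are detected.
$RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck

# Snapshot the full configuration; the relaunch and diff steps below use it.
$RPC save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json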
00:09:26.065 12:20:55 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:09:26.065 12:20:55 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:09:26.065 12:20:55 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:09:26.065 12:20:55 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:26.324 Calling clear_iscsi_subsystem 00:09:26.324 Calling clear_nvmf_subsystem 00:09:26.324 Calling clear_nbd_subsystem 00:09:26.324 Calling clear_ublk_subsystem 00:09:26.324 Calling clear_vhost_blk_subsystem 00:09:26.324 Calling clear_vhost_scsi_subsystem 00:09:26.324 Calling clear_bdev_subsystem 00:09:26.324 12:20:55 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:09:26.324 12:20:55 json_config -- json_config/json_config.sh@343 -- # count=100 00:09:26.324 12:20:55 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:09:26.324 12:20:55 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:26.324 12:20:55 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:26.324 12:20:55 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:09:26.890 12:20:55 json_config -- json_config/json_config.sh@345 -- # break 00:09:26.890 12:20:55 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:09:26.890 12:20:55 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:09:26.890 12:20:55 json_config -- json_config/common.sh@31 -- # local app=target 00:09:26.890 12:20:55 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:26.890 12:20:55 json_config -- json_config/common.sh@35 -- # [[ -n 71455 ]] 00:09:26.890 12:20:55 json_config -- json_config/common.sh@38 -- # kill -SIGINT 71455 00:09:26.890 12:20:55 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:26.890 12:20:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:26.890 12:20:55 json_config -- json_config/common.sh@41 -- # kill -0 71455 00:09:26.890 12:20:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:27.455 12:20:56 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:27.455 12:20:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:27.456 12:20:56 json_config -- json_config/common.sh@41 -- # kill -0 71455 00:09:27.456 SPDK target shutdown done 00:09:27.456 INFO: relaunching applications... 00:09:27.456 12:20:56 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:27.456 12:20:56 json_config -- json_config/common.sh@43 -- # break 00:09:27.456 12:20:56 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:27.456 12:20:56 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:27.456 12:20:56 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
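Note: the shutdown that just completed follows a simple pattern visible in the trace: send SIGINT to pid 71455, then poll the PID for up to 30 x 0.5 s until it disappears. A sketch of that loop is below; the hard-kill fallback at the end is an assumption, not something exercised in this log.

shutdown_app() {
  local pid=$1 i
  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
    if ! kill -0 "$pid" 2>/dev/null; then
      echo 'SPDK target shutdown done'
      return 0
    fi
    sleep 0.5
  done
  kill -9 "$pid"   # assumed last resort
}

shutdown_app 71455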
00:09:27.456 12:20:56 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:27.456 12:20:56 json_config -- json_config/common.sh@9 -- # local app=target 00:09:27.456 12:20:56 json_config -- json_config/common.sh@10 -- # shift 00:09:27.456 12:20:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:27.456 12:20:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:27.456 12:20:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:27.456 12:20:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:27.456 12:20:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:27.456 12:20:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=71640 00:09:27.456 12:20:56 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:27.456 12:20:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:27.456 Waiting for target to run... 00:09:27.456 12:20:56 json_config -- json_config/common.sh@25 -- # waitforlisten 71640 /var/tmp/spdk_tgt.sock 00:09:27.456 12:20:56 json_config -- common/autotest_common.sh@829 -- # '[' -z 71640 ']' 00:09:27.456 12:20:56 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:27.456 12:20:56 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.456 12:20:56 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:27.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:27.456 12:20:56 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.456 12:20:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:27.456 [2024-07-12 12:20:56.337958] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:09:27.456 [2024-07-12 12:20:56.338082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71640 ] 00:09:27.714 [2024-07-12 12:20:56.761896] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.972 [2024-07-12 12:20:56.825420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.972 [2024-07-12 12:20:56.951432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:28.270 [2024-07-12 12:20:57.158269] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.270 [2024-07-12 12:20:57.190350] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:28.270 00:09:28.270 INFO: Checking if target configuration is the same... 
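Note: the check announced above works by dumping the relaunched target's live configuration, normalizing both it and the saved spdk_tgt_config.json with config_filter.py -method sort, and diffing the results. A sketch follows, assuming config_filter.py reads the config on stdin the way json_diff.sh drives it; the /tmp file names are illustrative.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
SAVED=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

$RPC save_config | $FILTER -method sort > /tmp/live.sorted.json
$FILTER -method sort < "$SAVED"         > /tmp/saved.sorted.json

if diff -u /tmp/saved.sorted.json /tmp/live.sorted.json; then
  echo 'INFO: JSON config files are the same'
else
  echo 'INFO: configuration change detected.'
fi

The second half of the test then deletes MallocBdevForConfigChangeCheck over RPC and repeats exactly this comparison, expecting a non-empty diff, which is what the ret=1 path below shows.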
00:09:28.270 12:20:57 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.270 12:20:57 json_config -- common/autotest_common.sh@862 -- # return 0 00:09:28.270 12:20:57 json_config -- json_config/common.sh@26 -- # echo '' 00:09:28.270 12:20:57 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:09:28.270 12:20:57 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:28.270 12:20:57 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:28.270 12:20:57 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:09:28.270 12:20:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:28.270 + '[' 2 -ne 2 ']' 00:09:28.270 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:28.270 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:28.270 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:28.270 +++ basename /dev/fd/62 00:09:28.270 ++ mktemp /tmp/62.XXX 00:09:28.270 + tmp_file_1=/tmp/62.tbd 00:09:28.270 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:28.270 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:28.270 + tmp_file_2=/tmp/spdk_tgt_config.json.grT 00:09:28.270 + ret=0 00:09:28.270 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:28.836 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:28.836 + diff -u /tmp/62.tbd /tmp/spdk_tgt_config.json.grT 00:09:28.836 INFO: JSON config files are the same 00:09:28.836 + echo 'INFO: JSON config files are the same' 00:09:28.836 + rm /tmp/62.tbd /tmp/spdk_tgt_config.json.grT 00:09:28.836 + exit 0 00:09:28.836 INFO: changing configuration and checking if this can be detected... 00:09:28.836 12:20:57 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:09:28.836 12:20:57 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:28.836 12:20:57 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:28.836 12:20:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:29.094 12:20:57 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:29.094 12:20:57 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:09:29.094 12:20:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:29.094 + '[' 2 -ne 2 ']' 00:09:29.094 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:29.094 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:09:29.094 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:29.094 +++ basename /dev/fd/62 00:09:29.094 ++ mktemp /tmp/62.XXX 00:09:29.094 + tmp_file_1=/tmp/62.IGR 00:09:29.094 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:29.094 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:29.094 + tmp_file_2=/tmp/spdk_tgt_config.json.lJe 00:09:29.094 + ret=0 00:09:29.094 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:29.352 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:29.611 + diff -u /tmp/62.IGR /tmp/spdk_tgt_config.json.lJe 00:09:29.611 + ret=1 00:09:29.611 + echo '=== Start of file: /tmp/62.IGR ===' 00:09:29.611 + cat /tmp/62.IGR 00:09:29.611 + echo '=== End of file: /tmp/62.IGR ===' 00:09:29.611 + echo '' 00:09:29.611 + echo '=== Start of file: /tmp/spdk_tgt_config.json.lJe ===' 00:09:29.611 + cat /tmp/spdk_tgt_config.json.lJe 00:09:29.611 + echo '=== End of file: /tmp/spdk_tgt_config.json.lJe ===' 00:09:29.611 + echo '' 00:09:29.611 + rm /tmp/62.IGR /tmp/spdk_tgt_config.json.lJe 00:09:29.611 + exit 1 00:09:29.611 INFO: configuration change detected. 00:09:29.611 12:20:58 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:09:29.611 12:20:58 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:09:29.611 12:20:58 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:09:29.611 12:20:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:29.611 12:20:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:29.611 12:20:58 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:09:29.611 12:20:58 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:09:29.611 12:20:58 json_config -- json_config/json_config.sh@317 -- # [[ -n 71640 ]] 00:09:29.611 12:20:58 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:09:29.611 12:20:58 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:09:29.611 12:20:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:29.611 12:20:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:29.611 12:20:58 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:09:29.611 12:20:58 json_config -- json_config/json_config.sh@193 -- # uname -s 00:09:29.611 12:20:58 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:09:29.611 12:20:58 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:09:29.611 12:20:58 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:09:29.611 12:20:58 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:09:29.611 12:20:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:29.611 12:20:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:29.611 12:20:58 json_config -- json_config/json_config.sh@323 -- # killprocess 71640 00:09:29.611 12:20:58 json_config -- common/autotest_common.sh@948 -- # '[' -z 71640 ']' 00:09:29.611 12:20:58 json_config -- common/autotest_common.sh@952 -- # kill -0 71640 00:09:29.611 12:20:58 json_config -- common/autotest_common.sh@953 -- # uname 00:09:29.611 12:20:58 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:29.611 12:20:58 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71640 00:09:29.611 
killing process with pid 71640 00:09:29.611 12:20:58 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:29.611 12:20:58 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:29.611 12:20:58 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71640' 00:09:29.611 12:20:58 json_config -- common/autotest_common.sh@967 -- # kill 71640 00:09:29.611 12:20:58 json_config -- common/autotest_common.sh@972 -- # wait 71640 00:09:29.869 12:20:58 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:29.869 12:20:58 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:09:29.869 12:20:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:29.869 12:20:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:29.869 INFO: Success 00:09:29.869 12:20:58 json_config -- json_config/json_config.sh@328 -- # return 0 00:09:29.869 12:20:58 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:09:29.869 00:09:29.869 real 0m8.297s 00:09:29.869 user 0m11.729s 00:09:29.869 sys 0m1.816s 00:09:29.869 ************************************ 00:09:29.869 END TEST json_config 00:09:29.869 ************************************ 00:09:29.869 12:20:58 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:29.869 12:20:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:29.869 12:20:58 -- common/autotest_common.sh@1142 -- # return 0 00:09:29.869 12:20:58 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:29.869 12:20:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:29.869 12:20:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.869 12:20:58 -- common/autotest_common.sh@10 -- # set +x 00:09:29.869 ************************************ 00:09:29.869 START TEST json_config_extra_key 00:09:29.869 ************************************ 00:09:29.869 12:20:58 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:29.869 12:20:58 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:29.869 12:20:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:29.869 12:20:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.869 12:20:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.869 12:20:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.869 12:20:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.869 12:20:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.869 12:20:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.869 12:20:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.869 12:20:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.869 12:20:58 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.869 12:20:58 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.128 12:20:58 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:09:30.128 12:20:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:09:30.128 12:20:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.128 12:20:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.128 12:20:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:30.128 12:20:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.128 12:20:58 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:30.128 12:20:58 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.128 12:20:58 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.128 12:20:58 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.128 12:20:58 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.128 12:20:58 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.128 12:20:58 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.128 12:20:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:30.128 12:20:58 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.128 12:20:58 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:09:30.128 12:20:58 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:30.128 12:20:58 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:30.128 12:20:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.128 12:20:58 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.128 12:20:58 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.128 12:20:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:30.128 12:20:58 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:30.128 12:20:58 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:30.128 12:20:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:30.128 12:20:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:30.128 12:20:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:30.128 12:20:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:30.128 12:20:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:30.128 INFO: launching applications... 00:09:30.128 12:20:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:30.128 12:20:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:30.128 12:20:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:30.128 12:20:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:30.128 12:20:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:30.128 12:20:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:09:30.128 12:20:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:30.128 12:20:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:30.128 12:20:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:30.128 12:20:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:30.128 12:20:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:30.128 12:20:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:30.128 12:20:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:30.128 12:20:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:30.128 Waiting for target to run... 00:09:30.128 12:20:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71786 00:09:30.128 12:20:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
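The trace above is json_config/common.sh at work: json_config_test_start_app records the target's parameters (-m 0x1 -s 1024), its RPC socket (/var/tmp/spdk_tgt.sock) and the pre-generated config (test/json_config/extra_key.json), launches spdk_tgt with them (the full command line is logged just below) and then blocks in waitforlisten until the RPC socket answers. A minimal manual equivalent is sketched here; probing readiness with spdk_get_version is an assumption, the helper may poll a different cheap RPC:

    # Hedged sketch only; paths match this run's layout (/home/vagrant/spdk_repo/spdk).
    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json $SPDK/test/json_config/extra_key.json &
    pid=$!
    # Poll the RPC socket until the target responds; spdk_get_version is a cheap query.
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done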
00:09:30.128 12:20:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71786 /var/tmp/spdk_tgt.sock 00:09:30.128 12:20:58 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 71786 ']' 00:09:30.128 12:20:58 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:30.128 12:20:58 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:30.128 12:20:58 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:30.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:30.128 12:20:58 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:30.128 12:20:58 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:30.128 12:20:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:30.128 [2024-07-12 12:20:59.028942] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:09:30.128 [2024-07-12 12:20:59.029712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71786 ] 00:09:30.386 [2024-07-12 12:20:59.465645] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.644 [2024-07-12 12:20:59.534724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.644 [2024-07-12 12:20:59.555903] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:31.211 12:21:00 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:31.211 12:21:00 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:09:31.211 12:21:00 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:31.211 00:09:31.211 12:21:00 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:09:31.211 INFO: shutting down applications... 
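The shutdown path traced next (json_config_test_shutdown_app) does not kill the target outright: it sends SIGINT and then polls with kill -0 up to 30 times, sleeping 0.5 s between checks, so spdk_tgt gets roughly 15 seconds to exit cleanly before the helper would give up. A sketch of that loop, assuming $pid holds the target PID:

    # Mirror of the loop traced below (json_config/common.sh).
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        # kill -0 only tests whether the process still exists; stop once it is gone.
        kill -0 "$pid" 2>/dev/null || break
        sleep 0.5
    done
    echo 'SPDK target shutdown done'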
00:09:31.211 12:21:00 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:31.211 12:21:00 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:31.211 12:21:00 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:31.211 12:21:00 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71786 ]] 00:09:31.211 12:21:00 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71786 00:09:31.211 12:21:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:31.211 12:21:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:31.211 12:21:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71786 00:09:31.211 12:21:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:31.469 12:21:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:31.469 12:21:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:31.469 12:21:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71786 00:09:31.469 12:21:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:31.469 12:21:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:31.469 12:21:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:31.469 SPDK target shutdown done 00:09:31.469 Success 00:09:31.469 12:21:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:31.469 12:21:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:31.469 00:09:31.469 real 0m1.657s 00:09:31.469 user 0m1.581s 00:09:31.469 sys 0m0.450s 00:09:31.469 ************************************ 00:09:31.469 END TEST json_config_extra_key 00:09:31.469 ************************************ 00:09:31.469 12:21:00 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.469 12:21:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:31.728 12:21:00 -- common/autotest_common.sh@1142 -- # return 0 00:09:31.728 12:21:00 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:31.728 12:21:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:31.728 12:21:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.728 12:21:00 -- common/autotest_common.sh@10 -- # set +x 00:09:31.728 ************************************ 00:09:31.728 START TEST alias_rpc 00:09:31.728 ************************************ 00:09:31.728 12:21:00 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:31.728 * Looking for test storage... 
00:09:31.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:31.728 12:21:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:31.728 12:21:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71851 00:09:31.728 12:21:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71851 00:09:31.728 12:21:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:31.728 12:21:00 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 71851 ']' 00:09:31.728 12:21:00 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.728 12:21:00 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:31.728 12:21:00 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.728 12:21:00 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:31.728 12:21:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.728 [2024-07-12 12:21:00.740661] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:09:31.728 [2024-07-12 12:21:00.740829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71851 ] 00:09:31.987 [2024-07-12 12:21:00.873262] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.987 [2024-07-12 12:21:00.964811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.987 [2024-07-12 12:21:01.023980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:32.983 12:21:01 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.983 12:21:01 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:32.983 12:21:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:32.983 12:21:02 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71851 00:09:32.983 12:21:02 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 71851 ']' 00:09:32.983 12:21:02 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 71851 00:09:32.983 12:21:02 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:09:32.983 12:21:02 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:32.983 12:21:02 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71851 00:09:32.983 killing process with pid 71851 00:09:32.983 12:21:02 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:32.983 12:21:02 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:32.983 12:21:02 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71851' 00:09:32.983 12:21:02 alias_rpc -- common/autotest_common.sh@967 -- # kill 71851 00:09:32.983 12:21:02 alias_rpc -- common/autotest_common.sh@972 -- # wait 71851 00:09:33.549 ************************************ 00:09:33.549 END TEST alias_rpc 00:09:33.549 ************************************ 00:09:33.549 00:09:33.549 real 0m1.837s 00:09:33.549 user 0m2.099s 00:09:33.549 sys 0m0.455s 00:09:33.549 12:21:02 alias_rpc -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.549 12:21:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.549 12:21:02 -- common/autotest_common.sh@1142 -- # return 0 00:09:33.549 12:21:02 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:09:33.549 12:21:02 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:33.549 12:21:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:33.549 12:21:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.549 12:21:02 -- common/autotest_common.sh@10 -- # set +x 00:09:33.549 ************************************ 00:09:33.549 START TEST spdkcli_tcp 00:09:33.549 ************************************ 00:09:33.549 12:21:02 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:33.549 * Looking for test storage... 00:09:33.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:33.549 12:21:02 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:33.549 12:21:02 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:33.549 12:21:02 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:33.549 12:21:02 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:33.549 12:21:02 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:33.549 12:21:02 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:33.549 12:21:02 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:33.549 12:21:02 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:33.549 12:21:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.549 12:21:02 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71921 00:09:33.549 12:21:02 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71921 00:09:33.549 12:21:02 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 71921 ']' 00:09:33.549 12:21:02 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.549 12:21:02 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.549 12:21:02 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:33.549 12:21:02 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.549 12:21:02 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.549 12:21:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.808 [2024-07-12 12:21:02.651605] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
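spdkcli_tcp checks that the JSON-RPC server of the target starting here can be driven over TCP. The target only opens the default UNIX socket /var/tmp/spdk.sock, so the test (traced below) runs socat to forward TCP port 9998 to that socket and then points rpc.py at 127.0.0.1:9998; the long method list that follows is simply the reply to rpc_get_methods. A sketch of the same bridge, with the -r (connection retries) and -t (RPC timeout) values taken from tcp.sh as seen in the trace:

    # TCP-to-UNIX-socket bridge used by the test; background socat and keep its PID.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"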
00:09:33.808 [2024-07-12 12:21:02.651699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71921 ] 00:09:33.808 [2024-07-12 12:21:02.788675] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:33.808 [2024-07-12 12:21:02.877978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.808 [2024-07-12 12:21:02.877986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.065 [2024-07-12 12:21:02.937378] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:34.629 12:21:03 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:34.629 12:21:03 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:09:34.629 12:21:03 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:34.629 12:21:03 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71938 00:09:34.629 12:21:03 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:34.887 [ 00:09:34.887 "bdev_malloc_delete", 00:09:34.887 "bdev_malloc_create", 00:09:34.887 "bdev_null_resize", 00:09:34.887 "bdev_null_delete", 00:09:34.887 "bdev_null_create", 00:09:34.887 "bdev_nvme_cuse_unregister", 00:09:34.887 "bdev_nvme_cuse_register", 00:09:34.887 "bdev_opal_new_user", 00:09:34.887 "bdev_opal_set_lock_state", 00:09:34.887 "bdev_opal_delete", 00:09:34.887 "bdev_opal_get_info", 00:09:34.887 "bdev_opal_create", 00:09:34.887 "bdev_nvme_opal_revert", 00:09:34.887 "bdev_nvme_opal_init", 00:09:34.887 "bdev_nvme_send_cmd", 00:09:34.887 "bdev_nvme_get_path_iostat", 00:09:34.887 "bdev_nvme_get_mdns_discovery_info", 00:09:34.887 "bdev_nvme_stop_mdns_discovery", 00:09:34.887 "bdev_nvme_start_mdns_discovery", 00:09:34.887 "bdev_nvme_set_multipath_policy", 00:09:34.888 "bdev_nvme_set_preferred_path", 00:09:34.888 "bdev_nvme_get_io_paths", 00:09:34.888 "bdev_nvme_remove_error_injection", 00:09:34.888 "bdev_nvme_add_error_injection", 00:09:34.888 "bdev_nvme_get_discovery_info", 00:09:34.888 "bdev_nvme_stop_discovery", 00:09:34.888 "bdev_nvme_start_discovery", 00:09:34.888 "bdev_nvme_get_controller_health_info", 00:09:34.888 "bdev_nvme_disable_controller", 00:09:34.888 "bdev_nvme_enable_controller", 00:09:34.888 "bdev_nvme_reset_controller", 00:09:34.888 "bdev_nvme_get_transport_statistics", 00:09:34.888 "bdev_nvme_apply_firmware", 00:09:34.888 "bdev_nvme_detach_controller", 00:09:34.888 "bdev_nvme_get_controllers", 00:09:34.888 "bdev_nvme_attach_controller", 00:09:34.888 "bdev_nvme_set_hotplug", 00:09:34.888 "bdev_nvme_set_options", 00:09:34.888 "bdev_passthru_delete", 00:09:34.888 "bdev_passthru_create", 00:09:34.888 "bdev_lvol_set_parent_bdev", 00:09:34.888 "bdev_lvol_set_parent", 00:09:34.888 "bdev_lvol_check_shallow_copy", 00:09:34.888 "bdev_lvol_start_shallow_copy", 00:09:34.888 "bdev_lvol_grow_lvstore", 00:09:34.888 "bdev_lvol_get_lvols", 00:09:34.888 "bdev_lvol_get_lvstores", 00:09:34.888 "bdev_lvol_delete", 00:09:34.888 "bdev_lvol_set_read_only", 00:09:34.888 "bdev_lvol_resize", 00:09:34.888 "bdev_lvol_decouple_parent", 00:09:34.888 "bdev_lvol_inflate", 00:09:34.888 "bdev_lvol_rename", 00:09:34.888 "bdev_lvol_clone_bdev", 00:09:34.888 "bdev_lvol_clone", 00:09:34.888 "bdev_lvol_snapshot", 00:09:34.888 "bdev_lvol_create", 
00:09:34.888 "bdev_lvol_delete_lvstore", 00:09:34.888 "bdev_lvol_rename_lvstore", 00:09:34.888 "bdev_lvol_create_lvstore", 00:09:34.888 "bdev_raid_set_options", 00:09:34.888 "bdev_raid_remove_base_bdev", 00:09:34.888 "bdev_raid_add_base_bdev", 00:09:34.888 "bdev_raid_delete", 00:09:34.888 "bdev_raid_create", 00:09:34.888 "bdev_raid_get_bdevs", 00:09:34.888 "bdev_error_inject_error", 00:09:34.888 "bdev_error_delete", 00:09:34.888 "bdev_error_create", 00:09:34.888 "bdev_split_delete", 00:09:34.888 "bdev_split_create", 00:09:34.888 "bdev_delay_delete", 00:09:34.888 "bdev_delay_create", 00:09:34.888 "bdev_delay_update_latency", 00:09:34.888 "bdev_zone_block_delete", 00:09:34.888 "bdev_zone_block_create", 00:09:34.888 "blobfs_create", 00:09:34.888 "blobfs_detect", 00:09:34.888 "blobfs_set_cache_size", 00:09:34.888 "bdev_aio_delete", 00:09:34.888 "bdev_aio_rescan", 00:09:34.888 "bdev_aio_create", 00:09:34.888 "bdev_ftl_set_property", 00:09:34.888 "bdev_ftl_get_properties", 00:09:34.888 "bdev_ftl_get_stats", 00:09:34.888 "bdev_ftl_unmap", 00:09:34.888 "bdev_ftl_unload", 00:09:34.888 "bdev_ftl_delete", 00:09:34.888 "bdev_ftl_load", 00:09:34.888 "bdev_ftl_create", 00:09:34.888 "bdev_virtio_attach_controller", 00:09:34.888 "bdev_virtio_scsi_get_devices", 00:09:34.888 "bdev_virtio_detach_controller", 00:09:34.888 "bdev_virtio_blk_set_hotplug", 00:09:34.888 "bdev_iscsi_delete", 00:09:34.888 "bdev_iscsi_create", 00:09:34.888 "bdev_iscsi_set_options", 00:09:34.888 "bdev_uring_delete", 00:09:34.888 "bdev_uring_rescan", 00:09:34.888 "bdev_uring_create", 00:09:34.888 "accel_error_inject_error", 00:09:34.888 "ioat_scan_accel_module", 00:09:34.888 "dsa_scan_accel_module", 00:09:34.888 "iaa_scan_accel_module", 00:09:34.888 "keyring_file_remove_key", 00:09:34.888 "keyring_file_add_key", 00:09:34.888 "keyring_linux_set_options", 00:09:34.888 "iscsi_get_histogram", 00:09:34.888 "iscsi_enable_histogram", 00:09:34.888 "iscsi_set_options", 00:09:34.888 "iscsi_get_auth_groups", 00:09:34.888 "iscsi_auth_group_remove_secret", 00:09:34.888 "iscsi_auth_group_add_secret", 00:09:34.888 "iscsi_delete_auth_group", 00:09:34.888 "iscsi_create_auth_group", 00:09:34.888 "iscsi_set_discovery_auth", 00:09:34.888 "iscsi_get_options", 00:09:34.888 "iscsi_target_node_request_logout", 00:09:34.888 "iscsi_target_node_set_redirect", 00:09:34.888 "iscsi_target_node_set_auth", 00:09:34.888 "iscsi_target_node_add_lun", 00:09:34.888 "iscsi_get_stats", 00:09:34.888 "iscsi_get_connections", 00:09:34.888 "iscsi_portal_group_set_auth", 00:09:34.888 "iscsi_start_portal_group", 00:09:34.888 "iscsi_delete_portal_group", 00:09:34.888 "iscsi_create_portal_group", 00:09:34.888 "iscsi_get_portal_groups", 00:09:34.888 "iscsi_delete_target_node", 00:09:34.888 "iscsi_target_node_remove_pg_ig_maps", 00:09:34.888 "iscsi_target_node_add_pg_ig_maps", 00:09:34.888 "iscsi_create_target_node", 00:09:34.888 "iscsi_get_target_nodes", 00:09:34.888 "iscsi_delete_initiator_group", 00:09:34.888 "iscsi_initiator_group_remove_initiators", 00:09:34.888 "iscsi_initiator_group_add_initiators", 00:09:34.888 "iscsi_create_initiator_group", 00:09:34.888 "iscsi_get_initiator_groups", 00:09:34.888 "nvmf_set_crdt", 00:09:34.888 "nvmf_set_config", 00:09:34.888 "nvmf_set_max_subsystems", 00:09:34.888 "nvmf_stop_mdns_prr", 00:09:34.888 "nvmf_publish_mdns_prr", 00:09:34.888 "nvmf_subsystem_get_listeners", 00:09:34.888 "nvmf_subsystem_get_qpairs", 00:09:34.888 "nvmf_subsystem_get_controllers", 00:09:34.888 "nvmf_get_stats", 00:09:34.888 "nvmf_get_transports", 00:09:34.888 
"nvmf_create_transport", 00:09:34.888 "nvmf_get_targets", 00:09:34.888 "nvmf_delete_target", 00:09:34.888 "nvmf_create_target", 00:09:34.888 "nvmf_subsystem_allow_any_host", 00:09:34.888 "nvmf_subsystem_remove_host", 00:09:34.888 "nvmf_subsystem_add_host", 00:09:34.888 "nvmf_ns_remove_host", 00:09:34.888 "nvmf_ns_add_host", 00:09:34.888 "nvmf_subsystem_remove_ns", 00:09:34.888 "nvmf_subsystem_add_ns", 00:09:34.888 "nvmf_subsystem_listener_set_ana_state", 00:09:34.888 "nvmf_discovery_get_referrals", 00:09:34.888 "nvmf_discovery_remove_referral", 00:09:34.888 "nvmf_discovery_add_referral", 00:09:34.888 "nvmf_subsystem_remove_listener", 00:09:34.888 "nvmf_subsystem_add_listener", 00:09:34.888 "nvmf_delete_subsystem", 00:09:34.888 "nvmf_create_subsystem", 00:09:34.888 "nvmf_get_subsystems", 00:09:34.888 "env_dpdk_get_mem_stats", 00:09:34.888 "nbd_get_disks", 00:09:34.888 "nbd_stop_disk", 00:09:34.888 "nbd_start_disk", 00:09:34.888 "ublk_recover_disk", 00:09:34.888 "ublk_get_disks", 00:09:34.888 "ublk_stop_disk", 00:09:34.888 "ublk_start_disk", 00:09:34.888 "ublk_destroy_target", 00:09:34.888 "ublk_create_target", 00:09:34.888 "virtio_blk_create_transport", 00:09:34.888 "virtio_blk_get_transports", 00:09:34.888 "vhost_controller_set_coalescing", 00:09:34.888 "vhost_get_controllers", 00:09:34.888 "vhost_delete_controller", 00:09:34.888 "vhost_create_blk_controller", 00:09:34.888 "vhost_scsi_controller_remove_target", 00:09:34.888 "vhost_scsi_controller_add_target", 00:09:34.888 "vhost_start_scsi_controller", 00:09:34.888 "vhost_create_scsi_controller", 00:09:34.888 "thread_set_cpumask", 00:09:34.888 "framework_get_governor", 00:09:34.888 "framework_get_scheduler", 00:09:34.888 "framework_set_scheduler", 00:09:34.888 "framework_get_reactors", 00:09:34.888 "thread_get_io_channels", 00:09:34.888 "thread_get_pollers", 00:09:34.888 "thread_get_stats", 00:09:34.888 "framework_monitor_context_switch", 00:09:34.888 "spdk_kill_instance", 00:09:34.888 "log_enable_timestamps", 00:09:34.888 "log_get_flags", 00:09:34.888 "log_clear_flag", 00:09:34.888 "log_set_flag", 00:09:34.888 "log_get_level", 00:09:34.888 "log_set_level", 00:09:34.888 "log_get_print_level", 00:09:34.888 "log_set_print_level", 00:09:34.888 "framework_enable_cpumask_locks", 00:09:34.888 "framework_disable_cpumask_locks", 00:09:34.888 "framework_wait_init", 00:09:34.888 "framework_start_init", 00:09:34.888 "scsi_get_devices", 00:09:34.888 "bdev_get_histogram", 00:09:34.888 "bdev_enable_histogram", 00:09:34.888 "bdev_set_qos_limit", 00:09:34.888 "bdev_set_qd_sampling_period", 00:09:34.888 "bdev_get_bdevs", 00:09:34.888 "bdev_reset_iostat", 00:09:34.888 "bdev_get_iostat", 00:09:34.888 "bdev_examine", 00:09:34.888 "bdev_wait_for_examine", 00:09:34.888 "bdev_set_options", 00:09:34.888 "notify_get_notifications", 00:09:34.888 "notify_get_types", 00:09:34.888 "accel_get_stats", 00:09:34.888 "accel_set_options", 00:09:34.888 "accel_set_driver", 00:09:34.888 "accel_crypto_key_destroy", 00:09:34.888 "accel_crypto_keys_get", 00:09:34.888 "accel_crypto_key_create", 00:09:34.888 "accel_assign_opc", 00:09:34.888 "accel_get_module_info", 00:09:34.888 "accel_get_opc_assignments", 00:09:34.888 "vmd_rescan", 00:09:34.888 "vmd_remove_device", 00:09:34.888 "vmd_enable", 00:09:34.888 "sock_get_default_impl", 00:09:34.888 "sock_set_default_impl", 00:09:34.888 "sock_impl_set_options", 00:09:34.888 "sock_impl_get_options", 00:09:34.888 "iobuf_get_stats", 00:09:34.888 "iobuf_set_options", 00:09:34.888 "framework_get_pci_devices", 00:09:34.888 
"framework_get_config", 00:09:34.888 "framework_get_subsystems", 00:09:34.888 "trace_get_info", 00:09:34.888 "trace_get_tpoint_group_mask", 00:09:34.888 "trace_disable_tpoint_group", 00:09:34.888 "trace_enable_tpoint_group", 00:09:34.889 "trace_clear_tpoint_mask", 00:09:34.889 "trace_set_tpoint_mask", 00:09:34.889 "keyring_get_keys", 00:09:34.889 "spdk_get_version", 00:09:34.889 "rpc_get_methods" 00:09:34.889 ] 00:09:34.889 12:21:03 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:34.889 12:21:03 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:34.889 12:21:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:34.889 12:21:03 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:34.889 12:21:03 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71921 00:09:34.889 12:21:03 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 71921 ']' 00:09:34.889 12:21:03 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 71921 00:09:34.889 12:21:03 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:09:34.889 12:21:03 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:34.889 12:21:03 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71921 00:09:34.889 killing process with pid 71921 00:09:34.889 12:21:03 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:34.889 12:21:03 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:34.889 12:21:03 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71921' 00:09:34.889 12:21:03 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 71921 00:09:34.889 12:21:03 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 71921 00:09:35.454 ************************************ 00:09:35.454 END TEST spdkcli_tcp 00:09:35.454 ************************************ 00:09:35.454 00:09:35.454 real 0m1.813s 00:09:35.454 user 0m3.389s 00:09:35.454 sys 0m0.466s 00:09:35.454 12:21:04 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:35.454 12:21:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:35.454 12:21:04 -- common/autotest_common.sh@1142 -- # return 0 00:09:35.454 12:21:04 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:35.454 12:21:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:35.454 12:21:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.454 12:21:04 -- common/autotest_common.sh@10 -- # set +x 00:09:35.454 ************************************ 00:09:35.454 START TEST dpdk_mem_utility 00:09:35.454 ************************************ 00:09:35.454 12:21:04 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:35.454 * Looking for test storage... 00:09:35.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:35.454 12:21:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:35.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:35.454 12:21:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=72012 00:09:35.454 12:21:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 72012 00:09:35.454 12:21:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:35.454 12:21:04 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 72012 ']' 00:09:35.454 12:21:04 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.454 12:21:04 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:35.454 12:21:04 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.455 12:21:04 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:35.455 12:21:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:35.455 [2024-07-12 12:21:04.504868] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:09:35.455 [2024-07-12 12:21:04.504968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72012 ] 00:09:35.712 [2024-07-12 12:21:04.639305] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.712 [2024-07-12 12:21:04.727173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.712 [2024-07-12 12:21:04.783973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:36.645 12:21:05 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:36.645 12:21:05 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:09:36.645 12:21:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:36.645 12:21:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:36.645 12:21:05 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.645 12:21:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:36.645 { 00:09:36.645 "filename": "/tmp/spdk_mem_dump.txt" 00:09:36.645 } 00:09:36.645 12:21:05 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.645 12:21:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:36.645 DPDK memory size 814.000000 MiB in 1 heap(s) 00:09:36.645 1 heaps totaling size 814.000000 MiB 00:09:36.645 size: 814.000000 MiB heap id: 0 00:09:36.645 end heaps---------- 00:09:36.645 8 mempools totaling size 598.116089 MiB 00:09:36.645 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:36.645 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:36.645 size: 84.521057 MiB name: bdev_io_72012 00:09:36.645 size: 51.011292 MiB name: evtpool_72012 00:09:36.645 size: 50.003479 MiB name: msgpool_72012 00:09:36.645 size: 21.763794 MiB name: PDU_Pool 00:09:36.645 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:36.645 size: 0.026123 MiB name: Session_Pool 00:09:36.645 end mempools------- 00:09:36.645 6 memzones totaling size 4.142822 MiB 00:09:36.645 size: 1.000366 MiB name: RG_ring_0_72012 00:09:36.645 size: 1.000366 MiB 
name: RG_ring_1_72012 00:09:36.645 size: 1.000366 MiB name: RG_ring_4_72012 00:09:36.645 size: 1.000366 MiB name: RG_ring_5_72012 00:09:36.645 size: 0.125366 MiB name: RG_ring_2_72012 00:09:36.645 size: 0.015991 MiB name: RG_ring_3_72012 00:09:36.645 end memzones------- 00:09:36.645 12:21:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:36.645 heap id: 0 total size: 814.000000 MiB number of busy elements: 304 number of free elements: 15 00:09:36.645 list of free elements. size: 12.471191 MiB 00:09:36.645 element at address: 0x200000400000 with size: 1.999512 MiB 00:09:36.645 element at address: 0x200018e00000 with size: 0.999878 MiB 00:09:36.645 element at address: 0x200019000000 with size: 0.999878 MiB 00:09:36.645 element at address: 0x200003e00000 with size: 0.996277 MiB 00:09:36.645 element at address: 0x200031c00000 with size: 0.994446 MiB 00:09:36.645 element at address: 0x200013800000 with size: 0.978699 MiB 00:09:36.645 element at address: 0x200007000000 with size: 0.959839 MiB 00:09:36.645 element at address: 0x200019200000 with size: 0.936584 MiB 00:09:36.645 element at address: 0x200000200000 with size: 0.833191 MiB 00:09:36.645 element at address: 0x20001aa00000 with size: 0.568054 MiB 00:09:36.645 element at address: 0x20000b200000 with size: 0.488892 MiB 00:09:36.645 element at address: 0x200000800000 with size: 0.486328 MiB 00:09:36.645 element at address: 0x200019400000 with size: 0.485657 MiB 00:09:36.645 element at address: 0x200027e00000 with size: 0.396118 MiB 00:09:36.645 element at address: 0x200003a00000 with size: 0.347839 MiB 00:09:36.645 list of standard malloc elements. size: 199.266235 MiB 00:09:36.645 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:09:36.645 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:09:36.645 element at address: 0x200018efff80 with size: 1.000122 MiB 00:09:36.645 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:09:36.645 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:36.645 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:36.645 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:09:36.645 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:36.645 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:09:36.645 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:09:36.645 
element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:09:36.645 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000087c800 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000087c980 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:09:36.646 element at address: 
0x200003a59180 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a59240 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a59300 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a59480 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a59540 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a59600 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a59780 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a59840 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a59900 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003adb300 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003adb500 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003affa80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003affb40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000b27d280 with size: 
0.000183 MiB 00:09:36.646 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:09:36.646 
element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:09:36.646 element at address: 
0x20001aa95440 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e65680 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e65740 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6c340 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6e700 with size: 
0.000183 MiB 00:09:36.646 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:09:36.646 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:09:36.646 list of memzone associated elements. 
size: 602.262573 MiB 00:09:36.646 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:09:36.646 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:36.647 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:09:36.647 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:36.647 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:09:36.647 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_72012_0 00:09:36.647 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:09:36.647 associated memzone info: size: 48.002930 MiB name: MP_evtpool_72012_0 00:09:36.647 element at address: 0x200003fff380 with size: 48.003052 MiB 00:09:36.647 associated memzone info: size: 48.002930 MiB name: MP_msgpool_72012_0 00:09:36.647 element at address: 0x2000195be940 with size: 20.255554 MiB 00:09:36.647 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:36.647 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:09:36.647 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:36.647 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:09:36.647 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_72012 00:09:36.647 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:09:36.647 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_72012 00:09:36.647 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:36.647 associated memzone info: size: 1.007996 MiB name: MP_evtpool_72012 00:09:36.647 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:09:36.647 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:36.647 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:09:36.647 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:36.647 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:09:36.647 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:36.647 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:09:36.647 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:36.647 element at address: 0x200003eff180 with size: 1.000488 MiB 00:09:36.647 associated memzone info: size: 1.000366 MiB name: RG_ring_0_72012 00:09:36.647 element at address: 0x200003affc00 with size: 1.000488 MiB 00:09:36.647 associated memzone info: size: 1.000366 MiB name: RG_ring_1_72012 00:09:36.647 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:09:36.647 associated memzone info: size: 1.000366 MiB name: RG_ring_4_72012 00:09:36.647 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:09:36.647 associated memzone info: size: 1.000366 MiB name: RG_ring_5_72012 00:09:36.647 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:09:36.647 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_72012 00:09:36.647 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:09:36.647 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:36.647 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:09:36.647 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:36.647 element at address: 0x20001947c540 with size: 0.250488 MiB 00:09:36.647 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:36.647 element at address: 0x200003adf880 with size: 0.125488 MiB 00:09:36.647 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_72012 00:09:36.647 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:09:36.647 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:36.647 element at address: 0x200027e65800 with size: 0.023743 MiB 00:09:36.647 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:36.647 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:09:36.647 associated memzone info: size: 0.015991 MiB name: RG_ring_3_72012 00:09:36.647 element at address: 0x200027e6b940 with size: 0.002441 MiB 00:09:36.647 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:36.647 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:09:36.647 associated memzone info: size: 0.000183 MiB name: MP_msgpool_72012 00:09:36.647 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:09:36.647 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_72012 00:09:36.647 element at address: 0x200027e6c400 with size: 0.000305 MiB 00:09:36.647 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:36.647 12:21:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:36.647 12:21:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 72012 00:09:36.647 12:21:05 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 72012 ']' 00:09:36.647 12:21:05 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 72012 00:09:36.647 12:21:05 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:09:36.647 12:21:05 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:36.647 12:21:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72012 00:09:36.647 12:21:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:36.647 12:21:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:36.647 12:21:05 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72012' 00:09:36.647 killing process with pid 72012 00:09:36.647 12:21:05 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 72012 00:09:36.647 12:21:05 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 72012 00:09:37.213 00:09:37.213 real 0m1.688s 00:09:37.213 user 0m1.814s 00:09:37.213 sys 0m0.418s 00:09:37.213 12:21:06 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:37.213 12:21:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:37.213 ************************************ 00:09:37.213 END TEST dpdk_mem_utility 00:09:37.213 ************************************ 00:09:37.213 12:21:06 -- common/autotest_common.sh@1142 -- # return 0 00:09:37.213 12:21:06 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:37.213 12:21:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:37.213 12:21:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.213 12:21:06 -- common/autotest_common.sh@10 -- # set +x 00:09:37.213 ************************************ 00:09:37.213 START TEST event 00:09:37.213 ************************************ 00:09:37.213 12:21:06 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:37.213 * Looking for test storage... 
00:09:37.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:37.213 12:21:06 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:37.213 12:21:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:37.213 12:21:06 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:37.213 12:21:06 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:37.213 12:21:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.213 12:21:06 event -- common/autotest_common.sh@10 -- # set +x 00:09:37.213 ************************************ 00:09:37.213 START TEST event_perf 00:09:37.213 ************************************ 00:09:37.213 12:21:06 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:37.213 Running I/O for 1 seconds...[2024-07-12 12:21:06.197936] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:09:37.213 [2024-07-12 12:21:06.198035] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72089 ] 00:09:37.471 [2024-07-12 12:21:06.338140] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:37.471 [2024-07-12 12:21:06.441624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.471 [2024-07-12 12:21:06.441744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:37.471 [2024-07-12 12:21:06.441837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:37.471 Running I/O for 1 seconds...[2024-07-12 12:21:06.442137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.842 00:09:38.842 lcore 0: 205042 00:09:38.842 lcore 1: 205041 00:09:38.842 lcore 2: 205043 00:09:38.842 lcore 3: 205041 00:09:38.842 done. 00:09:38.842 00:09:38.842 real 0m1.337s 00:09:38.842 user 0m4.147s 00:09:38.842 sys 0m0.070s 00:09:38.842 12:21:07 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:38.842 12:21:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:38.842 ************************************ 00:09:38.842 END TEST event_perf 00:09:38.842 ************************************ 00:09:38.842 12:21:07 event -- common/autotest_common.sh@1142 -- # return 0 00:09:38.842 12:21:07 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:38.842 12:21:07 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:38.842 12:21:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.842 12:21:07 event -- common/autotest_common.sh@10 -- # set +x 00:09:38.842 ************************************ 00:09:38.842 START TEST event_reactor 00:09:38.842 ************************************ 00:09:38.842 12:21:07 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:38.842 [2024-07-12 12:21:07.590035] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:09:38.842 [2024-07-12 12:21:07.590337] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72122 ] 00:09:38.842 [2024-07-12 12:21:07.728887] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.842 [2024-07-12 12:21:07.821780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.215 test_start 00:09:40.215 oneshot 00:09:40.215 tick 100 00:09:40.215 tick 100 00:09:40.215 tick 250 00:09:40.215 tick 100 00:09:40.215 tick 100 00:09:40.215 tick 250 00:09:40.215 tick 100 00:09:40.215 tick 500 00:09:40.215 tick 100 00:09:40.215 tick 100 00:09:40.215 tick 250 00:09:40.215 tick 100 00:09:40.215 tick 100 00:09:40.215 test_end 00:09:40.215 00:09:40.215 real 0m1.325s 00:09:40.215 user 0m1.155s 00:09:40.215 sys 0m0.062s 00:09:40.215 12:21:08 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:40.215 ************************************ 00:09:40.215 END TEST event_reactor 00:09:40.215 ************************************ 00:09:40.215 12:21:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:40.215 12:21:08 event -- common/autotest_common.sh@1142 -- # return 0 00:09:40.215 12:21:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:40.215 12:21:08 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:40.215 12:21:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.215 12:21:08 event -- common/autotest_common.sh@10 -- # set +x 00:09:40.215 ************************************ 00:09:40.215 START TEST event_reactor_perf 00:09:40.215 ************************************ 00:09:40.215 12:21:08 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:40.215 [2024-07-12 12:21:08.970077] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:09:40.215 [2024-07-12 12:21:08.970179] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72158 ] 00:09:40.215 [2024-07-12 12:21:09.101516] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.215 [2024-07-12 12:21:09.203702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.589 test_start 00:09:41.589 test_end 00:09:41.589 Performance: 385412 events per second 00:09:41.589 ************************************ 00:09:41.589 END TEST event_reactor_perf 00:09:41.589 ************************************ 00:09:41.589 00:09:41.589 real 0m1.322s 00:09:41.589 user 0m1.158s 00:09:41.589 sys 0m0.057s 00:09:41.589 12:21:10 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:41.589 12:21:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:41.589 12:21:10 event -- common/autotest_common.sh@1142 -- # return 0 00:09:41.589 12:21:10 event -- event/event.sh@49 -- # uname -s 00:09:41.590 12:21:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:41.590 12:21:10 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:41.590 12:21:10 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:41.590 12:21:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:41.590 12:21:10 event -- common/autotest_common.sh@10 -- # set +x 00:09:41.590 ************************************ 00:09:41.590 START TEST event_scheduler 00:09:41.590 ************************************ 00:09:41.590 12:21:10 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:41.590 * Looking for test storage... 00:09:41.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:41.590 12:21:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:41.590 12:21:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=72219 00:09:41.590 12:21:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:41.590 12:21:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:41.590 12:21:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 72219 00:09:41.590 12:21:10 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 72219 ']' 00:09:41.590 12:21:10 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.590 12:21:10 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:41.590 12:21:10 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.590 12:21:10 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:41.590 12:21:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:41.590 [2024-07-12 12:21:10.447722] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:09:41.590 [2024-07-12 12:21:10.447857] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72219 ] 00:09:41.590 [2024-07-12 12:21:10.585696] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.590 [2024-07-12 12:21:10.666500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.590 [2024-07-12 12:21:10.666547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.590 [2024-07-12 12:21:10.666688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.590 [2024-07-12 12:21:10.666682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.526 12:21:11 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:42.526 12:21:11 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:09:42.526 12:21:11 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:42.526 12:21:11 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.526 12:21:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:42.526 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:42.526 POWER: Cannot set governor of lcore 0 to userspace 00:09:42.526 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:42.526 POWER: Cannot set governor of lcore 0 to performance 00:09:42.526 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:42.526 POWER: Cannot set governor of lcore 0 to userspace 00:09:42.526 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:42.526 POWER: Unable to set Power Management Environment for lcore 0 00:09:42.526 [2024-07-12 12:21:11.486672] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:09:42.526 [2024-07-12 12:21:11.486890] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:09:42.526 [2024-07-12 12:21:11.487126] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:09:42.526 [2024-07-12 12:21:11.487324] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:42.527 [2024-07-12 12:21:11.487541] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:42.527 [2024-07-12 12:21:11.487640] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:42.527 12:21:11 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.527 12:21:11 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:42.527 12:21:11 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.527 12:21:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:42.527 [2024-07-12 12:21:11.545842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:42.527 [2024-07-12 12:21:11.573366] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:09:42.527 12:21:11 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.527 12:21:11 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:42.527 12:21:11 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:42.527 12:21:11 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:42.527 12:21:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:42.527 ************************************ 00:09:42.527 START TEST scheduler_create_thread 00:09:42.527 ************************************ 00:09:42.527 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:09:42.527 12:21:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:42.527 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.527 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.527 2 00:09:42.527 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.527 12:21:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:42.527 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.527 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.527 3 00:09:42.527 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.527 12:21:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:42.527 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.527 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.785 4 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.785 5 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.785 6 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.785 7 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.785 8 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.785 9 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.785 10 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.785 12:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:43.351 ************************************ 00:09:43.351 END TEST scheduler_create_thread 00:09:43.351 ************************************ 00:09:43.351 12:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.351 00:09:43.351 real 0m0.589s 00:09:43.351 user 0m0.014s 00:09:43.351 sys 0m0.004s 00:09:43.351 12:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:43.351 12:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:43.351 12:21:12 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:09:43.351 12:21:12 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:43.351 12:21:12 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 72219 00:09:43.351 12:21:12 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 72219 ']' 00:09:43.351 12:21:12 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 72219 00:09:43.351 12:21:12 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:09:43.351 12:21:12 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:43.351 12:21:12 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72219 00:09:43.351 killing process with pid 72219 00:09:43.351 12:21:12 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:09:43.351 12:21:12 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:09:43.351 12:21:12 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72219' 00:09:43.351 12:21:12 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 72219 00:09:43.351 12:21:12 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 72219 00:09:43.608 [2024-07-12 12:21:12.655711] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
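(Annotation: this teardown, like the dpdk_mem_utility one earlier, follows the killprocess pattern from autotest_common.sh visible in the xtrace: probe the pid with kill -0, check the process name so a sudo wrapper is not signalled directly, then kill and wait. An approximate reconstruction from the trace, not the exact helper source; the real helper treats the sudo case specially instead of just flagging it:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                      # nothing to kill
      kill -0 "$pid" || return 0                     # process already gone
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          [ "$process_name" = sudo ] && echo "pid $pid is a sudo wrapper"   # handled specially in the real helper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                    # reap the child so its exit status is collected
  }
)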
00:09:43.866 ************************************ 00:09:43.866 END TEST event_scheduler 00:09:43.866 ************************************ 00:09:43.866 00:09:43.866 real 0m2.535s 00:09:43.866 user 0m5.406s 00:09:43.866 sys 0m0.346s 00:09:43.866 12:21:12 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:43.867 12:21:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:43.867 12:21:12 event -- common/autotest_common.sh@1142 -- # return 0 00:09:43.867 12:21:12 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:43.867 12:21:12 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:43.867 12:21:12 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:43.867 12:21:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.867 12:21:12 event -- common/autotest_common.sh@10 -- # set +x 00:09:43.867 ************************************ 00:09:43.867 START TEST app_repeat 00:09:43.867 ************************************ 00:09:43.867 12:21:12 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:09:43.867 12:21:12 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.867 12:21:12 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:43.867 12:21:12 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:43.867 12:21:12 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:43.867 12:21:12 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:43.867 12:21:12 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:43.867 12:21:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:43.867 Process app_repeat pid: 72297 00:09:43.867 spdk_app_start Round 0 00:09:43.867 12:21:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=72297 00:09:43.867 12:21:12 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:43.867 12:21:12 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:43.867 12:21:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 72297' 00:09:43.867 12:21:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:43.867 12:21:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:43.867 12:21:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72297 /var/tmp/spdk-nbd.sock 00:09:43.867 12:21:12 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 72297 ']' 00:09:43.867 12:21:12 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:43.867 12:21:12 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:43.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:43.867 12:21:12 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:43.867 12:21:12 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:43.867 12:21:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:43.867 [2024-07-12 12:21:12.948623] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:09:43.867 [2024-07-12 12:21:12.948743] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72297 ] 00:09:44.123 [2024-07-12 12:21:13.092278] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:44.123 [2024-07-12 12:21:13.195966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.123 [2024-07-12 12:21:13.195981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.379 [2024-07-12 12:21:13.255623] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:44.945 12:21:13 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:44.945 12:21:13 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:09:44.945 12:21:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:45.203 Malloc0 00:09:45.203 12:21:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:45.461 Malloc1 00:09:45.461 12:21:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:45.461 12:21:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:45.461 12:21:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:45.461 12:21:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:45.461 12:21:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:45.461 12:21:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:45.461 12:21:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:45.461 12:21:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:45.461 12:21:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:45.461 12:21:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:45.461 12:21:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:45.461 12:21:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:45.461 12:21:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:45.461 12:21:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:45.461 12:21:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:45.461 12:21:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:45.720 /dev/nbd0 00:09:45.720 12:21:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:45.720 12:21:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:45.720 12:21:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:45.720 12:21:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:45.720 12:21:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:45.720 12:21:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:45.720 12:21:14 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:45.720 12:21:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:45.720 12:21:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:45.720 12:21:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:45.720 12:21:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:45.720 1+0 records in 00:09:45.720 1+0 records out 00:09:45.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531859 s, 7.7 MB/s 00:09:45.720 12:21:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:45.720 12:21:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:45.720 12:21:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:45.720 12:21:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:45.720 12:21:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:45.720 12:21:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:45.720 12:21:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:45.720 12:21:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:45.978 /dev/nbd1 00:09:45.978 12:21:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:45.978 12:21:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:45.978 12:21:15 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:45.978 12:21:15 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:45.978 12:21:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:45.978 12:21:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:45.978 12:21:15 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:45.978 12:21:15 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:45.978 12:21:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:45.978 12:21:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:45.978 12:21:15 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:45.978 1+0 records in 00:09:45.978 1+0 records out 00:09:45.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000720935 s, 5.7 MB/s 00:09:45.978 12:21:15 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:46.257 12:21:15 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:46.257 12:21:15 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:46.257 12:21:15 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:46.257 12:21:15 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:46.257 12:21:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:46.257 12:21:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:46.257 12:21:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:09:46.257 12:21:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.257 12:21:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:46.257 12:21:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:46.257 { 00:09:46.257 "nbd_device": "/dev/nbd0", 00:09:46.257 "bdev_name": "Malloc0" 00:09:46.257 }, 00:09:46.257 { 00:09:46.257 "nbd_device": "/dev/nbd1", 00:09:46.257 "bdev_name": "Malloc1" 00:09:46.257 } 00:09:46.257 ]' 00:09:46.257 12:21:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:46.257 12:21:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:46.257 { 00:09:46.257 "nbd_device": "/dev/nbd0", 00:09:46.257 "bdev_name": "Malloc0" 00:09:46.257 }, 00:09:46.257 { 00:09:46.257 "nbd_device": "/dev/nbd1", 00:09:46.257 "bdev_name": "Malloc1" 00:09:46.257 } 00:09:46.257 ]' 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:46.514 /dev/nbd1' 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:46.514 /dev/nbd1' 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:46.514 256+0 records in 00:09:46.514 256+0 records out 00:09:46.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00791395 s, 132 MB/s 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:46.514 256+0 records in 00:09:46.514 256+0 records out 00:09:46.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307336 s, 34.1 MB/s 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:46.514 256+0 records in 00:09:46.514 256+0 records out 00:09:46.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274067 s, 38.3 MB/s 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:46.514 12:21:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:46.515 12:21:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:46.515 12:21:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:46.515 12:21:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:46.515 12:21:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:46.515 12:21:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.515 12:21:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:46.515 12:21:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:46.515 12:21:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:46.515 12:21:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.515 12:21:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:46.771 12:21:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:46.771 12:21:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:46.771 12:21:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:46.771 12:21:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.771 12:21:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.771 12:21:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:46.771 12:21:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:46.771 12:21:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.771 12:21:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.771 12:21:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:47.032 12:21:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:47.032 12:21:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:47.032 12:21:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:47.032 12:21:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.032 12:21:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.032 12:21:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:47.032 12:21:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:47.032 12:21:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.032 12:21:16 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:47.032 12:21:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.032 12:21:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:47.309 12:21:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:47.309 12:21:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:47.309 12:21:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:47.309 12:21:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:47.309 12:21:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:47.309 12:21:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:47.309 12:21:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:47.309 12:21:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:47.309 12:21:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:47.309 12:21:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:47.309 12:21:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:47.309 12:21:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:47.309 12:21:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:47.888 12:21:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:47.888 [2024-07-12 12:21:16.889316] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:48.145 [2024-07-12 12:21:16.976826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.145 [2024-07-12 12:21:16.976831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.145 [2024-07-12 12:21:17.030354] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:48.145 [2024-07-12 12:21:17.030437] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:48.145 [2024-07-12 12:21:17.030452] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:50.670 spdk_app_start Round 1 00:09:50.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:50.670 12:21:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:50.670 12:21:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:50.670 12:21:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72297 /var/tmp/spdk-nbd.sock 00:09:50.670 12:21:19 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 72297 ']' 00:09:50.670 12:21:19 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:50.670 12:21:19 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:50.670 12:21:19 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:50.670 12:21:19 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:50.670 12:21:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:50.928 12:21:20 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:50.928 12:21:20 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:09:50.928 12:21:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:51.186 Malloc0 00:09:51.186 12:21:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:51.444 Malloc1 00:09:51.444 12:21:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:51.444 12:21:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.444 12:21:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:51.444 12:21:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:51.444 12:21:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:51.444 12:21:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:51.444 12:21:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:51.444 12:21:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.444 12:21:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:51.444 12:21:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:51.444 12:21:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:51.444 12:21:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:51.444 12:21:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:51.444 12:21:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:51.444 12:21:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:51.444 12:21:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:51.702 /dev/nbd0 00:09:51.702 12:21:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:51.702 12:21:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:51.702 12:21:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:51.702 12:21:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:51.702 12:21:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:51.702 12:21:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:51.702 12:21:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:51.702 12:21:20 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:51.702 12:21:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:51.702 12:21:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:51.702 12:21:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:51.702 1+0 records in 00:09:51.702 1+0 records out 
00:09:51.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573198 s, 7.1 MB/s 00:09:51.702 12:21:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:51.702 12:21:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:51.702 12:21:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:51.702 12:21:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:51.702 12:21:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:51.702 12:21:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:51.702 12:21:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:51.702 12:21:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:51.960 /dev/nbd1 00:09:52.218 12:21:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:52.218 12:21:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:52.218 12:21:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:52.218 12:21:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:52.218 12:21:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:52.218 12:21:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:52.219 12:21:21 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:52.219 12:21:21 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:52.219 12:21:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:52.219 12:21:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:52.219 12:21:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:52.219 1+0 records in 00:09:52.219 1+0 records out 00:09:52.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578303 s, 7.1 MB/s 00:09:52.219 12:21:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:52.219 12:21:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:52.219 12:21:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:52.219 12:21:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:52.219 12:21:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:52.219 12:21:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:52.219 12:21:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:52.219 12:21:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:52.219 12:21:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.219 12:21:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:52.478 { 00:09:52.478 "nbd_device": "/dev/nbd0", 00:09:52.478 "bdev_name": "Malloc0" 00:09:52.478 }, 00:09:52.478 { 00:09:52.478 "nbd_device": "/dev/nbd1", 00:09:52.478 "bdev_name": "Malloc1" 00:09:52.478 } 
00:09:52.478 ]' 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:52.478 { 00:09:52.478 "nbd_device": "/dev/nbd0", 00:09:52.478 "bdev_name": "Malloc0" 00:09:52.478 }, 00:09:52.478 { 00:09:52.478 "nbd_device": "/dev/nbd1", 00:09:52.478 "bdev_name": "Malloc1" 00:09:52.478 } 00:09:52.478 ]' 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:52.478 /dev/nbd1' 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:52.478 /dev/nbd1' 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:52.478 256+0 records in 00:09:52.478 256+0 records out 00:09:52.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0075719 s, 138 MB/s 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:52.478 256+0 records in 00:09:52.478 256+0 records out 00:09:52.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222254 s, 47.2 MB/s 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:52.478 256+0 records in 00:09:52.478 256+0 records out 00:09:52.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258875 s, 40.5 MB/s 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.478 12:21:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:52.770 12:21:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:52.770 12:21:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:52.770 12:21:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:52.770 12:21:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:52.770 12:21:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:52.770 12:21:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:52.770 12:21:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:52.770 12:21:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:52.770 12:21:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.770 12:21:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:53.049 12:21:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:53.049 12:21:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:53.049 12:21:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:53.049 12:21:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.050 12:21:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.050 12:21:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:53.050 12:21:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:53.050 12:21:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.050 12:21:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:53.050 12:21:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.050 12:21:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:53.309 12:21:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:53.309 12:21:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:53.309 12:21:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:09:53.309 12:21:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:53.309 12:21:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:53.309 12:21:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:53.309 12:21:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:53.567 12:21:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:53.567 12:21:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:53.567 12:21:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:53.567 12:21:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:53.567 12:21:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:53.567 12:21:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:53.567 12:21:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:53.825 [2024-07-12 12:21:22.829391] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:54.083 [2024-07-12 12:21:22.916123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.083 [2024-07-12 12:21:22.916134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.083 [2024-07-12 12:21:22.969882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:54.083 [2024-07-12 12:21:22.969987] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:54.083 [2024-07-12 12:21:22.970001] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:56.611 spdk_app_start Round 2 00:09:56.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:56.611 12:21:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:56.611 12:21:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:56.611 12:21:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72297 /var/tmp/spdk-nbd.sock 00:09:56.611 12:21:25 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 72297 ']' 00:09:56.611 12:21:25 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:56.611 12:21:25 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:56.611 12:21:25 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
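The nbd_dd_data_verify cycle traced above reduces to a write-then-compare loop over the exported NBD devices. A minimal sketch of that pattern, using the same dd and cmp arguments as the trace (the scratch-file path here is illustrative, not the test's own):

  tmp_file=/tmp/nbdrandtest                  # hypothetical scratch location
  nbd_list=('/dev/nbd0' '/dev/nbd1')

  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # 1 MiB of random data
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write it to each export, bypassing the page cache
  done
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"                              # read back and compare the first 1 MiB byte for byte
  done
  rm "$tmp_file"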
00:09:56.611 12:21:25 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:56.611 12:21:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:56.869 12:21:25 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:56.869 12:21:25 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:09:56.869 12:21:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:57.125 Malloc0 00:09:57.125 12:21:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:57.382 Malloc1 00:09:57.382 12:21:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:57.382 12:21:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:57.383 12:21:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:57.383 12:21:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:57.383 12:21:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:57.383 12:21:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:57.383 12:21:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:57.383 12:21:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:57.383 12:21:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:57.383 12:21:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:57.383 12:21:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:57.383 12:21:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:57.383 12:21:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:57.383 12:21:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:57.383 12:21:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:57.383 12:21:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:57.642 /dev/nbd0 00:09:57.642 12:21:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:57.642 12:21:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:57.642 12:21:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:57.642 12:21:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:57.642 12:21:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:57.642 12:21:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:57.642 12:21:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:57.642 12:21:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:57.642 12:21:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:57.642 12:21:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:57.642 12:21:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:57.642 1+0 records in 00:09:57.642 1+0 records out 
00:09:57.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185524 s, 22.1 MB/s 00:09:57.642 12:21:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:57.642 12:21:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:57.642 12:21:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:57.642 12:21:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:57.642 12:21:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:57.642 12:21:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:57.642 12:21:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:57.642 12:21:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:57.913 /dev/nbd1 00:09:57.913 12:21:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:57.913 12:21:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:57.913 12:21:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:57.913 12:21:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:09:57.913 12:21:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:57.913 12:21:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:57.913 12:21:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:58.208 12:21:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:09:58.208 12:21:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:58.208 12:21:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:58.208 12:21:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:58.208 1+0 records in 00:09:58.208 1+0 records out 00:09:58.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000658919 s, 6.2 MB/s 00:09:58.208 12:21:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:58.208 12:21:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:09:58.208 12:21:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:58.208 12:21:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:58.208 12:21:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:09:58.208 12:21:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:58.208 12:21:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:58.208 12:21:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:58.208 12:21:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:58.208 12:21:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:58.208 12:21:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:58.208 { 00:09:58.208 "nbd_device": "/dev/nbd0", 00:09:58.208 "bdev_name": "Malloc0" 00:09:58.208 }, 00:09:58.208 { 00:09:58.208 "nbd_device": "/dev/nbd1", 00:09:58.208 "bdev_name": "Malloc1" 00:09:58.208 } 
00:09:58.208 ]' 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:58.492 { 00:09:58.492 "nbd_device": "/dev/nbd0", 00:09:58.492 "bdev_name": "Malloc0" 00:09:58.492 }, 00:09:58.492 { 00:09:58.492 "nbd_device": "/dev/nbd1", 00:09:58.492 "bdev_name": "Malloc1" 00:09:58.492 } 00:09:58.492 ]' 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:58.492 /dev/nbd1' 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:58.492 /dev/nbd1' 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:58.492 256+0 records in 00:09:58.492 256+0 records out 00:09:58.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102578 s, 102 MB/s 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:58.492 256+0 records in 00:09:58.492 256+0 records out 00:09:58.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0309389 s, 33.9 MB/s 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:58.492 256+0 records in 00:09:58.492 256+0 records out 00:09:58.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238675 s, 43.9 MB/s 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:58.492 12:21:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:58.750 12:21:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:58.750 12:21:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:58.750 12:21:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:58.750 12:21:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:58.750 12:21:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:58.750 12:21:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:58.750 12:21:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:58.750 12:21:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:58.750 12:21:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:58.750 12:21:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:59.009 12:21:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:59.009 12:21:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:59.009 12:21:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:59.009 12:21:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:59.009 12:21:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:59.009 12:21:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:59.009 12:21:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:59.009 12:21:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:59.009 12:21:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:59.009 12:21:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.009 12:21:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:59.267 12:21:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:59.267 12:21:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:59.267 12:21:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:09:59.267 12:21:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:59.267 12:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:59.267 12:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:59.267 12:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:59.267 12:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:59.267 12:21:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:59.267 12:21:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:59.267 12:21:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:59.267 12:21:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:59.267 12:21:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:59.832 12:21:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:59.832 [2024-07-12 12:21:28.798052] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:59.832 [2024-07-12 12:21:28.895076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.832 [2024-07-12 12:21:28.895092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.090 [2024-07-12 12:21:28.949552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:00.090 [2024-07-12 12:21:28.949653] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:00.090 [2024-07-12 12:21:28.949668] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:02.623 12:21:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 72297 /var/tmp/spdk-nbd.sock 00:10:02.623 12:21:31 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 72297 ']' 00:10:02.623 12:21:31 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:02.623 12:21:31 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:02.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:02.623 12:21:31 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
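Teardown follows the same shape every round: nbd_stop_disk is issued over the RPC socket for each export, the helper polls /proc/partitions until the kernel device disappears, and nbd_get_disks is expected to come back empty. A sketch of that sequence (rpc.py path and socket come from the trace; the 20-attempt bound mirrors waitfornbd_exit but is an assumption, not copied from the script):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  for dev in /dev/nbd0 /dev/nbd1; do
      "$rpc" -s "$sock" nbd_stop_disk "$dev"
      name=$(basename "$dev")
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$name" /proc/partitions || break   # device has left the kernel: stop polling
          sleep 0.1
      done
  done
  "$rpc" -s "$sock" nbd_get_disks                        # prints [] once nothing is exported, so grep -c /dev/nbd counts 0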
00:10:02.623 12:21:31 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:02.623 12:21:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:02.880 12:21:31 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:02.880 12:21:31 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:02.881 12:21:31 event.app_repeat -- event/event.sh@39 -- # killprocess 72297 00:10:02.881 12:21:31 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 72297 ']' 00:10:02.881 12:21:31 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 72297 00:10:02.881 12:21:31 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:10:02.881 12:21:31 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:02.881 12:21:31 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72297 00:10:02.881 killing process with pid 72297 00:10:02.881 12:21:31 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:02.881 12:21:31 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:02.881 12:21:31 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72297' 00:10:02.881 12:21:31 event.app_repeat -- common/autotest_common.sh@967 -- # kill 72297 00:10:02.881 12:21:31 event.app_repeat -- common/autotest_common.sh@972 -- # wait 72297 00:10:03.139 spdk_app_start is called in Round 0. 00:10:03.139 Shutdown signal received, stop current app iteration 00:10:03.139 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:10:03.139 spdk_app_start is called in Round 1. 00:10:03.139 Shutdown signal received, stop current app iteration 00:10:03.139 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:10:03.139 spdk_app_start is called in Round 2. 00:10:03.139 Shutdown signal received, stop current app iteration 00:10:03.139 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:10:03.139 spdk_app_start is called in Round 3. 
00:10:03.139 Shutdown signal received, stop current app iteration 00:10:03.139 12:21:32 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:03.139 12:21:32 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:03.139 00:10:03.139 real 0m19.209s 00:10:03.139 user 0m43.136s 00:10:03.139 sys 0m2.953s 00:10:03.139 12:21:32 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:03.139 12:21:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:03.139 ************************************ 00:10:03.139 END TEST app_repeat 00:10:03.139 ************************************ 00:10:03.139 12:21:32 event -- common/autotest_common.sh@1142 -- # return 0 00:10:03.139 12:21:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:03.139 12:21:32 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:03.139 12:21:32 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:03.139 12:21:32 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:03.139 12:21:32 event -- common/autotest_common.sh@10 -- # set +x 00:10:03.139 ************************************ 00:10:03.139 START TEST cpu_locks 00:10:03.139 ************************************ 00:10:03.139 12:21:32 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:03.400 * Looking for test storage... 00:10:03.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:03.400 12:21:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:03.400 12:21:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:03.400 12:21:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:03.400 12:21:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:03.400 12:21:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:03.400 12:21:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:03.400 12:21:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:03.400 ************************************ 00:10:03.400 START TEST default_locks 00:10:03.400 ************************************ 00:10:03.400 12:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:10:03.400 12:21:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72735 00:10:03.400 12:21:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72735 00:10:03.400 12:21:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:03.400 12:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 72735 ']' 00:10:03.400 12:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.400 12:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:03.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.400 12:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
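Both the app_repeat target above and every spdk_tgt in the cpu_locks tests that follow are stopped through the same killprocess helper: confirm the pid is still alive, check what it is, send SIGTERM, and reap it. A simplified stand-in (the real helper in common/autotest_common.sh carries extra sudo handling not shown here):

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1               # nothing to do if the process is already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")  # an SPDK app shows up as reactor_0
      echo "killing process with pid $pid ($name)"
      kill "$pid"                              # default SIGTERM triggers a clean shutdown
      wait "$pid" 2> /dev/null || true         # reap it if it is our child; ignore its exit status
  }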
00:10:03.400 12:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:03.400 12:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:03.400 [2024-07-12 12:21:32.332896] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:03.400 [2024-07-12 12:21:32.332988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72735 ] 00:10:03.400 [2024-07-12 12:21:32.464016] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.667 [2024-07-12 12:21:32.553419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.667 [2024-07-12 12:21:32.606651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:04.229 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:04.229 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:10:04.229 12:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72735 00:10:04.229 12:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72735 00:10:04.229 12:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:04.793 12:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72735 00:10:04.793 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 72735 ']' 00:10:04.793 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 72735 00:10:04.793 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:10:04.793 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:04.793 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72735 00:10:04.793 killing process with pid 72735 00:10:04.793 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:04.793 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:04.793 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72735' 00:10:04.793 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 72735 00:10:04.793 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 72735 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72735 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 72735 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:10:05.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:05.051 ERROR: process (pid: 72735) is no longer running 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 72735 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 72735 ']' 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:05.051 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (72735) - No such process 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:05.051 00:10:05.051 real 0m1.709s 00:10:05.051 user 0m1.801s 00:10:05.051 sys 0m0.521s 00:10:05.051 ************************************ 00:10:05.051 END TEST default_locks 00:10:05.051 ************************************ 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:05.051 12:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:05.051 12:21:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:05.051 12:21:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:05.051 12:21:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:05.051 12:21:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.051 12:21:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:05.051 ************************************ 00:10:05.051 START TEST default_locks_via_rpc 00:10:05.051 ************************************ 00:10:05.051 12:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:10:05.051 12:21:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72776 00:10:05.051 12:21:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
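The failure path above is deliberate: waitforlisten against the pid that was just killed is wrapped in NOT, the expected-failure helper, and the test only continues because the wrapped call exits non-zero. A simplified version of that idiom (the real helper also validates the wrapped command, which is what the type -t check in the trace is doing):

  NOT() {
      local es=0
      "$@" || es=$?       # run the command, capturing its exit status
      (( es != 0 ))       # succeed only if the wrapped command failed
  }

  # usage, as in the trace above:
  #   NOT waitforlisten 72735 /var/tmp/spdk.sock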
00:10:05.051 12:21:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72776 00:10:05.051 12:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 72776 ']' 00:10:05.051 12:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.051 12:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:05.051 12:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.051 12:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:05.051 12:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.051 [2024-07-12 12:21:34.094391] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:05.051 [2024-07-12 12:21:34.095005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72776 ] 00:10:05.308 [2024-07-12 12:21:34.233432] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.309 [2024-07-12 12:21:34.326632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.309 [2024-07-12 12:21:34.379994] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:06.242 12:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:06.242 12:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:06.242 12:21:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:06.242 12:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.242 12:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.242 12:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.242 12:21:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:06.242 12:21:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:06.242 12:21:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:06.242 12:21:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:06.242 12:21:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:06.242 12:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.242 12:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.242 12:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.242 12:21:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72776 00:10:06.242 12:21:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72776 00:10:06.242 12:21:35 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:06.500 12:21:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72776 00:10:06.500 12:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 72776 ']' 00:10:06.500 12:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 72776 00:10:06.500 12:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:10:06.500 12:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:06.500 12:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72776 00:10:06.500 killing process with pid 72776 00:10:06.500 12:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:06.500 12:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:06.500 12:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72776' 00:10:06.500 12:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 72776 00:10:06.500 12:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 72776 00:10:06.757 00:10:06.757 real 0m1.785s 00:10:06.757 user 0m1.911s 00:10:06.757 sys 0m0.544s 00:10:06.757 12:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:06.757 12:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.757 ************************************ 00:10:06.757 END TEST default_locks_via_rpc 00:10:06.757 ************************************ 00:10:07.015 12:21:35 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:07.015 12:21:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:07.015 12:21:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:07.015 12:21:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.015 12:21:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:07.015 ************************************ 00:10:07.015 START TEST non_locking_app_on_locked_coremask 00:10:07.015 ************************************ 00:10:07.015 12:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:10:07.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
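default_locks_via_rpc, which finishes just above, toggles the same per-core locks at runtime instead of at startup; rpc_cmd in the trace is the test suite's wrapper around scripts/rpc.py. The sequence is roughly the following (socket path as in the trace; tgt_pid stands for the running target's pid, 72776 above):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock

  "$rpc" -s "$sock" framework_disable_cpumask_locks   # target releases its per-core lock files
  "$rpc" -s "$sock" framework_enable_cpumask_locks    # target re-acquires them
  lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock       # the spdk_cpu_lock file is held again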
00:10:07.015 12:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72827 00:10:07.015 12:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72827 /var/tmp/spdk.sock 00:10:07.015 12:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:07.015 12:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 72827 ']' 00:10:07.015 12:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.015 12:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:07.015 12:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.015 12:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:07.015 12:21:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:07.015 [2024-07-12 12:21:35.929305] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:07.015 [2024-07-12 12:21:35.929401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72827 ] 00:10:07.015 [2024-07-12 12:21:36.066523] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.273 [2024-07-12 12:21:36.154661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.273 [2024-07-12 12:21:36.207941] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:07.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:07.836 12:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:07.837 12:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:07.837 12:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:07.837 12:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72843 00:10:07.837 12:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72843 /var/tmp/spdk2.sock 00:10:07.837 12:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 72843 ']' 00:10:07.837 12:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:07.837 12:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:07.837 12:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
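non_locking_app_on_locked_coremask shows what the locks are for: a second target can share core 0 only if it opts out of the lock and listens on its own RPC socket. Reduced to the two launch commands from the trace, plus the lock assertion made against the first pid:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$spdk_tgt" -m 0x1 & pid1=$!                                                  # first instance takes the core 0 lock
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!   # second instance skips the lock, on a separate socket
  lslocks -p "$pid1" | grep -q spdk_cpu_lock                                    # only the first instance holds spdk_cpu_lock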
00:10:07.837 12:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:07.837 12:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:08.094 [2024-07-12 12:21:36.956890] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:08.094 [2024-07-12 12:21:36.957304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72843 ] 00:10:08.094 [2024-07-12 12:21:37.091189] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:08.094 [2024-07-12 12:21:37.091250] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.353 [2024-07-12 12:21:37.229028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.353 [2024-07-12 12:21:37.341193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:08.929 12:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:08.929 12:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:08.929 12:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72827 00:10:08.929 12:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72827 00:10:08.929 12:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:09.495 12:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72827 00:10:09.495 12:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 72827 ']' 00:10:09.495 12:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 72827 00:10:09.495 12:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:10:09.495 12:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:09.495 12:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72827 00:10:09.495 killing process with pid 72827 00:10:09.495 12:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:09.495 12:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:09.495 12:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72827' 00:10:09.495 12:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 72827 00:10:09.495 12:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 72827 00:10:10.430 12:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72843 00:10:10.430 12:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 72843 ']' 00:10:10.430 12:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@952 -- # kill -0 72843 00:10:10.430 12:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:10:10.430 12:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:10.430 12:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72843 00:10:10.430 killing process with pid 72843 00:10:10.430 12:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:10.430 12:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:10.430 12:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72843' 00:10:10.430 12:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 72843 00:10:10.430 12:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 72843 00:10:10.688 ************************************ 00:10:10.688 END TEST non_locking_app_on_locked_coremask 00:10:10.688 ************************************ 00:10:10.688 00:10:10.688 real 0m3.846s 00:10:10.688 user 0m4.227s 00:10:10.688 sys 0m1.034s 00:10:10.688 12:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:10.688 12:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:10.688 12:21:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:10.688 12:21:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:10.688 12:21:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:10.688 12:21:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.688 12:21:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:10.688 ************************************ 00:10:10.688 START TEST locking_app_on_unlocked_coremask 00:10:10.688 ************************************ 00:10:10.688 12:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:10:10.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
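Every launch in these tests blocks on waitforlisten until the new target answers RPCs on its socket; the helper's body is hidden behind xtrace_disable in this log. A minimal stand-in that captures the idea, assuming the usual rpc.py rpc_get_methods probe (the real helper in common/autotest_common.sh is more involved):

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" || return 1       # the target died before it ever listened
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
              return 0                     # target is up and serving RPCs
          fi
          sleep 0.1
      done
      return 1                             # timed out
  }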
00:10:10.688 12:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72912 00:10:10.688 12:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72912 /var/tmp/spdk.sock 00:10:10.688 12:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:10.688 12:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 72912 ']' 00:10:10.688 12:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.688 12:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:10.688 12:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.688 12:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:10.688 12:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:10.947 [2024-07-12 12:21:39.824449] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:10.947 [2024-07-12 12:21:39.824757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72912 ] 00:10:10.947 [2024-07-12 12:21:39.958197] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:10.947 [2024-07-12 12:21:39.958594] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.205 [2024-07-12 12:21:40.055816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.205 [2024-07-12 12:21:40.110883] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:11.772 12:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:11.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:11.772 12:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:11.772 12:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72928 00:10:11.772 12:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72928 /var/tmp/spdk2.sock 00:10:11.772 12:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 72928 ']' 00:10:11.772 12:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:11.772 12:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:11.772 12:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:11.772 12:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
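locking_app_on_unlocked_coremask flips the roles: the first target starts with --disable-cpumask-locks, so the plain second instance is free to claim the core 0 lock, and the lslocks assertion is made against the second pid (72928 above). In sketch form:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$spdk_tgt" -m 0x1 --disable-cpumask-locks & pid1=$!    # leaves core 0 unlocked
  "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!     # claims the core 0 lock
  lslocks -p "$pid2" | grep -q spdk_cpu_lock              # the lock is held by the second instance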
00:10:11.772 12:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:11.772 12:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:11.772 [2024-07-12 12:21:40.840897] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:11.772 [2024-07-12 12:21:40.840993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72928 ] 00:10:12.030 [2024-07-12 12:21:40.984552] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.289 [2024-07-12 12:21:41.160672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.289 [2024-07-12 12:21:41.276261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:12.872 12:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:12.872 12:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:12.872 12:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72928 00:10:12.872 12:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72928 00:10:12.872 12:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:13.830 12:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72912 00:10:13.830 12:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 72912 ']' 00:10:13.830 12:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 72912 00:10:13.830 12:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:10:13.830 12:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:13.830 12:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72912 00:10:13.830 killing process with pid 72912 00:10:13.830 12:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:13.830 12:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:13.830 12:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72912' 00:10:13.830 12:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 72912 00:10:13.830 12:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 72912 00:10:14.397 12:21:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72928 00:10:14.397 12:21:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 72928 ']' 00:10:14.397 12:21:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 72928 00:10:14.397 12:21:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 
00:10:14.397 12:21:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:14.397 12:21:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72928 00:10:14.397 12:21:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:14.397 killing process with pid 72928 00:10:14.397 12:21:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:14.397 12:21:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72928' 00:10:14.397 12:21:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 72928 00:10:14.397 12:21:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 72928 00:10:14.963 ************************************ 00:10:14.963 END TEST locking_app_on_unlocked_coremask 00:10:14.963 ************************************ 00:10:14.963 00:10:14.963 real 0m4.045s 00:10:14.963 user 0m4.465s 00:10:14.963 sys 0m1.134s 00:10:14.963 12:21:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:14.963 12:21:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:14.963 12:21:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:14.963 12:21:43 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:14.963 12:21:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:14.963 12:21:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:14.963 12:21:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:14.963 ************************************ 00:10:14.963 START TEST locking_app_on_locked_coremask 00:10:14.963 ************************************ 00:10:14.963 12:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:10:14.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.963 12:21:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72995 00:10:14.963 12:21:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72995 /var/tmp/spdk.sock 00:10:14.963 12:21:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:14.963 12:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 72995 ']' 00:10:14.963 12:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.963 12:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:14.963 12:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:14.963 12:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:14.963 12:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:14.963 [2024-07-12 12:21:43.918048] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:14.963 [2024-07-12 12:21:43.918149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72995 ] 00:10:15.221 [2024-07-12 12:21:44.049285] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.221 [2024-07-12 12:21:44.132225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.221 [2024-07-12 12:21:44.186728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:16.156 12:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:16.156 12:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:16.156 12:21:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=73011 00:10:16.156 12:21:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 73011 /var/tmp/spdk2.sock 00:10:16.156 12:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:10:16.156 12:21:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:16.156 12:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 73011 /var/tmp/spdk2.sock 00:10:16.156 12:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:10:16.156 12:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:16.156 12:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:10:16.156 12:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:16.156 12:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 73011 /var/tmp/spdk2.sock 00:10:16.156 12:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 73011 ']' 00:10:16.156 12:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:16.156 12:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:16.156 12:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:16.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
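In contrast to the previous case, the first target here (pid 72995) keeps core locking enabled and therefore owns the lock on core 0. The second launch just traced (pid 73011, same -m 0x1 mask) is wrapped in NOT/valid_exec_arg because it is expected to fail, which is exactly what the next lines show. A rough standalone reproduction of the conflict, reusing the binary path and socket name from the trace (the sleep is a crude stand-in for the test's waitforlisten), would be:

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_TGT" -m 0x1 &                          # first instance locks core 0
    sleep 2
    "$SPDK_TGT" -m 0x1 -r /var/tmp/spdk2.sock     # should exit with
                                                  # "Cannot create lock on core 0 ..."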
00:10:16.156 12:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:16.156 12:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:16.156 [2024-07-12 12:21:44.950980] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:16.156 [2024-07-12 12:21:44.951311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73011 ] 00:10:16.156 [2024-07-12 12:21:45.092797] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72995 has claimed it. 00:10:16.156 [2024-07-12 12:21:45.096906] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:16.724 ERROR: process (pid: 73011) is no longer running 00:10:16.724 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (73011) - No such process 00:10:16.724 12:21:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:16.724 12:21:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:10:16.724 12:21:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:10:16.724 12:21:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:16.724 12:21:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:16.724 12:21:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:16.724 12:21:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72995 00:10:16.724 12:21:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72995 00:10:16.724 12:21:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:16.983 12:21:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72995 00:10:16.983 12:21:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 72995 ']' 00:10:16.983 12:21:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 72995 00:10:16.983 12:21:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:10:16.983 12:21:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:16.983 12:21:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72995 00:10:16.983 killing process with pid 72995 00:10:16.983 12:21:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:16.983 12:21:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:16.983 12:21:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72995' 00:10:16.983 12:21:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 72995 00:10:16.983 12:21:46 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 72995 00:10:17.591 00:10:17.591 real 0m2.550s 00:10:17.591 user 0m2.944s 00:10:17.591 sys 0m0.613s 00:10:17.591 12:21:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:17.591 12:21:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:17.591 ************************************ 00:10:17.591 END TEST locking_app_on_locked_coremask 00:10:17.591 ************************************ 00:10:17.591 12:21:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:17.591 12:21:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:17.591 12:21:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:17.591 12:21:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:17.591 12:21:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:17.591 ************************************ 00:10:17.591 START TEST locking_overlapped_coremask 00:10:17.591 ************************************ 00:10:17.591 12:21:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:10:17.591 12:21:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=73051 00:10:17.591 12:21:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 73051 /var/tmp/spdk.sock 00:10:17.591 12:21:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 73051 ']' 00:10:17.591 12:21:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:17.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.591 12:21:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.591 12:21:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:17.591 12:21:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.591 12:21:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:17.591 12:21:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:17.591 [2024-07-12 12:21:46.537452] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:10:17.591 [2024-07-12 12:21:46.537556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73051 ] 00:10:17.849 [2024-07-12 12:21:46.675030] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:17.849 [2024-07-12 12:21:46.758530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.849 [2024-07-12 12:21:46.758649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.849 [2024-07-12 12:21:46.758655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.849 [2024-07-12 12:21:46.813716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:18.781 12:21:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:18.782 12:21:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:18.782 12:21:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=73075 00:10:18.782 12:21:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:18.782 12:21:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 73075 /var/tmp/spdk2.sock 00:10:18.782 12:21:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:10:18.782 12:21:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 73075 /var/tmp/spdk2.sock 00:10:18.782 12:21:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:10:18.782 12:21:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:18.782 12:21:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:10:18.782 12:21:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:18.782 12:21:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 73075 /var/tmp/spdk2.sock 00:10:18.782 12:21:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 73075 ']' 00:10:18.782 12:21:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:18.782 12:21:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:18.782 12:21:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:18.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:18.782 12:21:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:18.782 12:21:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:18.782 [2024-07-12 12:21:47.585638] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
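The two core masks traced above overlap on exactly one core, which is why the error that follows names core 2: 0x7 is binary 111 (cores 0-2) for the first target, and 0x1c is binary 11100 (cores 2-4) for the second. The collision can be checked with plain shell arithmetic, independent of SPDK:

    printf 'overlapping cores mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2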
00:10:18.782 [2024-07-12 12:21:47.585739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73075 ] 00:10:18.782 [2024-07-12 12:21:47.728383] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 73051 has claimed it. 00:10:18.782 [2024-07-12 12:21:47.728478] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:19.350 ERROR: process (pid: 73075) is no longer running 00:10:19.350 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (73075) - No such process 00:10:19.350 12:21:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:19.350 12:21:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:10:19.350 12:21:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:10:19.350 12:21:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:19.350 12:21:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:19.350 12:21:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:19.350 12:21:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:19.350 12:21:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:19.350 12:21:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:19.350 12:21:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:19.350 12:21:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 73051 00:10:19.350 12:21:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 73051 ']' 00:10:19.350 12:21:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 73051 00:10:19.350 12:21:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:10:19.350 12:21:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:19.350 12:21:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73051 00:10:19.350 12:21:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:19.350 12:21:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:19.350 12:21:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73051' 00:10:19.350 killing process with pid 73051 00:10:19.350 12:21:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 73051 00:10:19.350 12:21:48 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 73051 00:10:19.916 00:10:19.916 real 0m2.258s 00:10:19.916 user 0m6.343s 00:10:19.916 sys 0m0.443s 00:10:19.916 ************************************ 00:10:19.916 END TEST locking_overlapped_coremask 00:10:19.916 ************************************ 00:10:19.916 12:21:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:19.916 12:21:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:19.916 12:21:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:19.916 12:21:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:19.916 12:21:48 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:19.916 12:21:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.916 12:21:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:19.916 ************************************ 00:10:19.916 START TEST locking_overlapped_coremask_via_rpc 00:10:19.916 ************************************ 00:10:19.916 12:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:10:19.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.916 12:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=73115 00:10:19.917 12:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 73115 /var/tmp/spdk.sock 00:10:19.917 12:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:19.917 12:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 73115 ']' 00:10:19.917 12:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.917 12:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:19.917 12:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.917 12:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:19.917 12:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.917 [2024-07-12 12:21:48.850941] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:19.917 [2024-07-12 12:21:48.851033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73115 ] 00:10:19.917 [2024-07-12 12:21:48.988584] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:19.917 [2024-07-12 12:21:48.988629] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:20.175 [2024-07-12 12:21:49.072476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.175 [2024-07-12 12:21:49.072597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.175 [2024-07-12 12:21:49.072617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.175 [2024-07-12 12:21:49.126464] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:21.144 12:21:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:21.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:21.144 12:21:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:21.144 12:21:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:21.144 12:21:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=73133 00:10:21.144 12:21:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 73133 /var/tmp/spdk2.sock 00:10:21.144 12:21:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 73133 ']' 00:10:21.144 12:21:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:21.144 12:21:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:21.144 12:21:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:21.144 12:21:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:21.144 12:21:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.144 [2024-07-12 12:21:49.886553] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:21.144 [2024-07-12 12:21:49.886887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73133 ] 00:10:21.144 [2024-07-12 12:21:50.029349] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:21.144 [2024-07-12 12:21:50.029432] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:21.426 [2024-07-12 12:21:50.220847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.426 [2024-07-12 12:21:50.220959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:21.426 [2024-07-12 12:21:50.220961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.426 [2024-07-12 12:21:50.331434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.991 [2024-07-12 12:21:50.892923] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 73115 has claimed it. 00:10:21.991 request: 00:10:21.991 { 00:10:21.991 "method": "framework_enable_cpumask_locks", 00:10:21.991 "req_id": 1 00:10:21.991 } 00:10:21.991 Got JSON-RPC error response 00:10:21.991 response: 00:10:21.991 { 00:10:21.991 "code": -32603, 00:10:21.991 "message": "Failed to claim CPU core: 2" 00:10:21.991 } 00:10:21.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
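This variant starts both targets with --disable-cpumask-locks and only turns locking on afterwards over JSON-RPC; the trace above shows the first framework_enable_cpumask_locks call succeeding on /var/tmp/spdk.sock and the second, against /var/tmp/spdk2.sock, coming back with error -32603 because core 2 is already claimed. Outside the test harness the same sequence could be driven with SPDK's standard rpc.py client (the path relative to the repo root is an assumption here; socket paths are the ones from the log):

    ./scripts/rpc.py framework_enable_cpumask_locks                         # first target (0x7) claims cores 0-2
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # fails with -32603: the 0x1c target
                                                                            # overlaps on core 2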
00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 73115 /var/tmp/spdk.sock 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 73115 ']' 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.991 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:21.992 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.992 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:21.992 12:21:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.249 12:21:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:22.249 12:21:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:22.249 12:21:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 73133 /var/tmp/spdk2.sock 00:10:22.249 12:21:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 73133 ']' 00:10:22.249 12:21:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:22.249 12:21:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:22.249 12:21:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:22.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:22.249 12:21:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:22.249 12:21:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.507 12:21:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:22.507 12:21:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:22.507 12:21:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:22.507 12:21:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:22.507 12:21:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:22.507 12:21:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:22.507 00:10:22.507 real 0m2.637s 00:10:22.507 user 0m1.356s 00:10:22.507 sys 0m0.204s 00:10:22.507 12:21:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:22.507 ************************************ 00:10:22.507 END TEST locking_overlapped_coremask_via_rpc 00:10:22.507 ************************************ 00:10:22.507 12:21:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.507 12:21:51 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:22.507 12:21:51 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:22.507 12:21:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 73115 ]] 00:10:22.507 12:21:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 73115 00:10:22.507 12:21:51 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 73115 ']' 00:10:22.507 12:21:51 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 73115 00:10:22.507 12:21:51 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:10:22.507 12:21:51 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:22.507 12:21:51 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73115 00:10:22.507 killing process with pid 73115 00:10:22.507 12:21:51 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:22.507 12:21:51 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:22.507 12:21:51 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73115' 00:10:22.507 12:21:51 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 73115 00:10:22.507 12:21:51 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 73115 00:10:23.073 12:21:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 73133 ]] 00:10:23.073 12:21:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 73133 00:10:23.073 12:21:51 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 73133 ']' 00:10:23.073 12:21:51 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 73133 00:10:23.073 12:21:51 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:10:23.073 12:21:51 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:23.073 12:21:51 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73133 00:10:23.073 killing process with pid 73133 00:10:23.073 12:21:51 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:10:23.074 12:21:51 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:10:23.074 12:21:51 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73133' 00:10:23.074 12:21:51 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 73133 00:10:23.074 12:21:51 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 73133 00:10:23.332 12:21:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:23.332 Process with pid 73115 is not found 00:10:23.332 12:21:52 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:23.332 12:21:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 73115 ]] 00:10:23.332 12:21:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 73115 00:10:23.332 12:21:52 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 73115 ']' 00:10:23.332 12:21:52 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 73115 00:10:23.332 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (73115) - No such process 00:10:23.332 12:21:52 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 73115 is not found' 00:10:23.332 12:21:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 73133 ]] 00:10:23.332 12:21:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 73133 00:10:23.332 12:21:52 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 73133 ']' 00:10:23.332 12:21:52 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 73133 00:10:23.332 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (73133) - No such process 00:10:23.332 Process with pid 73133 is not found 00:10:23.332 12:21:52 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 73133 is not found' 00:10:23.332 12:21:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:23.332 00:10:23.332 real 0m20.079s 00:10:23.332 user 0m35.374s 00:10:23.332 sys 0m5.339s 00:10:23.332 12:21:52 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:23.332 ************************************ 00:10:23.332 END TEST cpu_locks 00:10:23.332 ************************************ 00:10:23.332 12:21:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:23.332 12:21:52 event -- common/autotest_common.sh@1142 -- # return 0 00:10:23.332 ************************************ 00:10:23.332 END TEST event 00:10:23.332 ************************************ 00:10:23.332 00:10:23.332 real 0m46.202s 00:10:23.332 user 1m30.509s 00:10:23.332 sys 0m9.060s 00:10:23.332 12:21:52 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:23.332 12:21:52 event -- common/autotest_common.sh@10 -- # set +x 00:10:23.332 12:21:52 -- common/autotest_common.sh@1142 -- # return 0 00:10:23.332 12:21:52 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:23.332 12:21:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:23.332 12:21:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.332 12:21:52 -- common/autotest_common.sh@10 -- # set +x 00:10:23.332 ************************************ 00:10:23.332 START TEST thread 
00:10:23.332 ************************************ 00:10:23.332 12:21:52 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:23.591 * Looking for test storage... 00:10:23.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:23.591 12:21:52 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:23.591 12:21:52 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:10:23.591 12:21:52 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.591 12:21:52 thread -- common/autotest_common.sh@10 -- # set +x 00:10:23.591 ************************************ 00:10:23.591 START TEST thread_poller_perf 00:10:23.591 ************************************ 00:10:23.591 12:21:52 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:23.591 [2024-07-12 12:21:52.455510] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:23.591 [2024-07-12 12:21:52.455773] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73257 ] 00:10:23.591 [2024-07-12 12:21:52.592916] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.849 [2024-07-12 12:21:52.676836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.849 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:24.783 ====================================== 00:10:24.783 busy:2212743822 (cyc) 00:10:24.783 total_run_count: 337000 00:10:24.783 tsc_hz: 2200000000 (cyc) 00:10:24.783 ====================================== 00:10:24.783 poller_cost: 6566 (cyc), 2984 (nsec) 00:10:24.783 00:10:24.783 real 0m1.322s 00:10:24.783 user 0m1.153s 00:10:24.783 sys 0m0.059s 00:10:24.783 12:21:53 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:24.783 12:21:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:24.783 ************************************ 00:10:24.783 END TEST thread_poller_perf 00:10:24.783 ************************************ 00:10:24.783 12:21:53 thread -- common/autotest_common.sh@1142 -- # return 0 00:10:24.783 12:21:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:24.783 12:21:53 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:10:24.783 12:21:53 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.783 12:21:53 thread -- common/autotest_common.sh@10 -- # set +x 00:10:24.783 ************************************ 00:10:24.783 START TEST thread_poller_perf 00:10:24.783 ************************************ 00:10:24.783 12:21:53 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:24.783 [2024-07-12 12:21:53.832689] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:10:24.783 [2024-07-12 12:21:53.832774] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73293 ] 00:10:25.042 [2024-07-12 12:21:53.975760] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.042 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:10:25.042 [2024-07-12 12:21:54.067479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.436 ====================================== 00:10:26.436 busy:2202507077 (cyc) 00:10:26.436 total_run_count: 4507000 00:10:26.436 tsc_hz: 2200000000 (cyc) 00:10:26.436 ====================================== 00:10:26.436 poller_cost: 488 (cyc), 221 (nsec) 00:10:26.436 00:10:26.436 real 0m1.314s 00:10:26.436 user 0m1.150s 00:10:26.436 sys 0m0.056s 00:10:26.436 12:21:55 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:26.436 ************************************ 00:10:26.436 END TEST thread_poller_perf 00:10:26.436 ************************************ 00:10:26.436 12:21:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:26.436 12:21:55 thread -- common/autotest_common.sh@1142 -- # return 0 00:10:26.436 12:21:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:26.436 ************************************ 00:10:26.436 END TEST thread 00:10:26.436 ************************************ 00:10:26.436 00:10:26.436 real 0m2.825s 00:10:26.436 user 0m2.364s 00:10:26.436 sys 0m0.236s 00:10:26.436 12:21:55 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:26.436 12:21:55 thread -- common/autotest_common.sh@10 -- # set +x 00:10:26.436 12:21:55 -- common/autotest_common.sh@1142 -- # return 0 00:10:26.436 12:21:55 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:26.436 12:21:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:26.436 12:21:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.436 12:21:55 -- common/autotest_common.sh@10 -- # set +x 00:10:26.436 ************************************ 00:10:26.436 START TEST accel 00:10:26.436 ************************************ 00:10:26.436 12:21:55 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:26.436 * Looking for test storage... 00:10:26.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
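The poller_cost figures in the two runs above are consistent with a simple per-poll average: busy cycles divided by total_run_count, converted to nanoseconds through tsc_hz. Reproducing the reported values with shell integer arithmetic:

    tsc_hz=2200000000
    busy=2212743822; runs=337000                        # 1 us period run
    echo $(( busy / runs ))                             # 6566 cyc per poll
    echo $(( busy / runs * 1000000000 / tsc_hz ))       # 2984 ns per poll
    busy=2202507077; runs=4507000                       # 0 us period run
    echo $(( busy / runs ))                             # 488 cyc per poll
    echo $(( busy / runs * 1000000000 / tsc_hz ))       # 221 ns per poll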
00:10:26.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:26.436 12:21:55 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:10:26.436 12:21:55 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:10:26.436 12:21:55 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:26.436 12:21:55 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=73367 00:10:26.436 12:21:55 accel -- accel/accel.sh@63 -- # waitforlisten 73367 00:10:26.436 12:21:55 accel -- common/autotest_common.sh@829 -- # '[' -z 73367 ']' 00:10:26.436 12:21:55 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.436 12:21:55 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:26.436 12:21:55 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:10:26.436 12:21:55 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.436 12:21:55 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:26.437 12:21:55 accel -- accel/accel.sh@61 -- # build_accel_config 00:10:26.437 12:21:55 accel -- common/autotest_common.sh@10 -- # set +x 00:10:26.437 12:21:55 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:26.437 12:21:55 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:26.437 12:21:55 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:26.437 12:21:55 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:26.437 12:21:55 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:26.437 12:21:55 accel -- accel/accel.sh@40 -- # local IFS=, 00:10:26.437 12:21:55 accel -- accel/accel.sh@41 -- # jq -r . 00:10:26.437 [2024-07-12 12:21:55.366820] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:26.437 [2024-07-12 12:21:55.367137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73367 ] 00:10:26.437 [2024-07-12 12:21:55.500696] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.695 [2024-07-12 12:21:55.595353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.695 [2024-07-12 12:21:55.651918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:27.631 12:21:56 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:27.631 12:21:56 accel -- common/autotest_common.sh@862 -- # return 0 00:10:27.631 12:21:56 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:10:27.631 12:21:56 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:10:27.631 12:21:56 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:10:27.631 12:21:56 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:10:27.631 12:21:56 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:27.631 12:21:56 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:10:27.631 12:21:56 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:10:27.631 12:21:56 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.631 12:21:56 accel -- common/autotest_common.sh@10 -- # set +x 00:10:27.631 12:21:56 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.631 12:21:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.631 12:21:56 accel -- accel/accel.sh@72 -- # IFS== 00:10:27.631 12:21:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:27.631 12:21:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:27.631 12:21:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.631 12:21:56 accel -- accel/accel.sh@72 -- # IFS== 00:10:27.631 12:21:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:27.631 12:21:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:27.631 12:21:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.631 12:21:56 accel -- accel/accel.sh@72 -- # IFS== 00:10:27.631 12:21:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:27.631 12:21:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:27.631 12:21:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.631 12:21:56 accel -- accel/accel.sh@72 -- # IFS== 00:10:27.631 12:21:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:27.631 12:21:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:27.631 12:21:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.631 12:21:56 accel -- accel/accel.sh@72 -- # IFS== 00:10:27.631 12:21:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:27.631 12:21:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:27.631 12:21:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.631 12:21:56 accel -- accel/accel.sh@72 -- # IFS== 00:10:27.631 12:21:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:27.631 12:21:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:27.631 12:21:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.631 12:21:56 accel -- accel/accel.sh@72 -- # IFS== 00:10:27.631 12:21:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:27.631 12:21:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:27.632 12:21:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.632 12:21:56 accel -- accel/accel.sh@72 -- # IFS== 00:10:27.632 12:21:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:27.632 12:21:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:27.632 12:21:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.632 12:21:56 accel -- accel/accel.sh@72 -- # IFS== 00:10:27.632 12:21:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:27.632 12:21:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:27.632 12:21:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.632 12:21:56 accel -- accel/accel.sh@72 -- # IFS== 00:10:27.632 12:21:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:27.632 12:21:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:27.632 12:21:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.632 12:21:56 accel -- accel/accel.sh@72 -- # IFS== 00:10:27.632 12:21:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:27.632 
12:21:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:27.632 12:21:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.632 12:21:56 accel -- accel/accel.sh@72 -- # IFS== 00:10:27.632 12:21:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:27.632 12:21:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:27.632 12:21:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.632 12:21:56 accel -- accel/accel.sh@72 -- # IFS== 00:10:27.632 12:21:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:27.632 12:21:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:27.632 12:21:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.632 12:21:56 accel -- accel/accel.sh@72 -- # IFS== 00:10:27.632 12:21:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:27.632 12:21:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:27.632 12:21:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.632 12:21:56 accel -- accel/accel.sh@72 -- # IFS== 00:10:27.632 12:21:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:10:27.632 12:21:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:10:27.632 12:21:56 accel -- accel/accel.sh@75 -- # killprocess 73367 00:10:27.632 12:21:56 accel -- common/autotest_common.sh@948 -- # '[' -z 73367 ']' 00:10:27.632 12:21:56 accel -- common/autotest_common.sh@952 -- # kill -0 73367 00:10:27.632 12:21:56 accel -- common/autotest_common.sh@953 -- # uname 00:10:27.632 12:21:56 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:27.632 12:21:56 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73367 00:10:27.632 killing process with pid 73367 00:10:27.632 12:21:56 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:27.632 12:21:56 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:27.632 12:21:56 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73367' 00:10:27.632 12:21:56 accel -- common/autotest_common.sh@967 -- # kill 73367 00:10:27.632 12:21:56 accel -- common/autotest_common.sh@972 -- # wait 73367 00:10:27.890 12:21:56 accel -- accel/accel.sh@76 -- # trap - ERR 00:10:27.890 12:21:56 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:10:27.890 12:21:56 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:27.890 12:21:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.890 12:21:56 accel -- common/autotest_common.sh@10 -- # set +x 00:10:27.890 12:21:56 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:10:27.890 12:21:56 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:10:27.890 12:21:56 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:10:27.890 12:21:56 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:27.890 12:21:56 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:27.890 12:21:56 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:27.890 12:21:56 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:27.890 12:21:56 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:27.890 12:21:56 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:10:27.890 12:21:56 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
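The get_expected_opcs block traced above asks the target which module handles each accel opcode (the accel_get_opc_assignments RPC), flattens the JSON reply into key=value lines with jq, and folds them into an associative array; on this run every opcode is recorded as handled by the generic software module, since no accel driver configuration was passed in. Stripped of the xtrace noise, a sketch of the same parse (the rpc.py path is an assumption; the jq filter is the one shown in the trace):

    declare -A opc_assignments
    while IFS== read -r opc module; do
        opc_assignments["$opc"]=$module   # the test itself records 'software' as the
                                          # expected module for every opcode
    done < <(./scripts/rpc.py accel_get_opc_assignments \
             | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]')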
00:10:27.890 12:21:56 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:27.890 12:21:56 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:10:27.890 12:21:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:27.890 12:21:56 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:10:27.890 12:21:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:27.890 12:21:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.890 12:21:56 accel -- common/autotest_common.sh@10 -- # set +x 00:10:27.890 ************************************ 00:10:27.890 START TEST accel_missing_filename 00:10:27.890 ************************************ 00:10:27.890 12:21:56 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:10:27.890 12:21:56 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:10:27.890 12:21:56 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:10:27.890 12:21:56 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:27.890 12:21:56 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:27.890 12:21:56 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:27.890 12:21:56 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:27.890 12:21:56 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:10:27.890 12:21:56 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:10:27.890 12:21:56 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:10:27.890 12:21:56 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:27.890 12:21:56 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:27.890 12:21:56 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:27.890 12:21:56 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:27.890 12:21:56 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:27.890 12:21:56 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:10:27.891 12:21:56 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:10:28.152 [2024-07-12 12:21:56.982388] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:28.152 [2024-07-12 12:21:56.982475] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73419 ] 00:10:28.152 [2024-07-12 12:21:57.115693] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.152 [2024-07-12 12:21:57.197910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.413 [2024-07-12 12:21:57.254841] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:28.413 [2024-07-12 12:21:57.331197] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:10:28.413 A filename is required. 
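The error above is the expected outcome of this negative test: accel_perf is asked to run a compress workload without -l pointing at an input file, so the app refuses to start and the surrounding NOT/es bookkeeping treats the failure as a pass. A direct reproduction, assuming the example binary path from the trace:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
    echo "exit status: $?"    # non-zero, after printing 'A filename is required.'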
00:10:28.413 12:21:57 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:10:28.413 12:21:57 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:28.413 12:21:57 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:10:28.413 12:21:57 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:10:28.413 12:21:57 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:10:28.413 12:21:57 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:28.413 00:10:28.413 real 0m0.438s 00:10:28.413 user 0m0.278s 00:10:28.413 sys 0m0.106s 00:10:28.413 12:21:57 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:28.413 12:21:57 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:10:28.413 ************************************ 00:10:28.413 END TEST accel_missing_filename 00:10:28.413 ************************************ 00:10:28.413 12:21:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:28.413 12:21:57 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:28.413 12:21:57 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:10:28.413 12:21:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.413 12:21:57 accel -- common/autotest_common.sh@10 -- # set +x 00:10:28.413 ************************************ 00:10:28.413 START TEST accel_compress_verify 00:10:28.413 ************************************ 00:10:28.413 12:21:57 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:28.413 12:21:57 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:10:28.413 12:21:57 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:28.413 12:21:57 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:28.413 12:21:57 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:28.413 12:21:57 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:28.413 12:21:57 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:28.413 12:21:57 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:28.413 12:21:57 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:28.413 12:21:57 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:10:28.413 12:21:57 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:28.413 12:21:57 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:28.413 12:21:57 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:28.413 12:21:57 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:28.413 12:21:57 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:28.413 12:21:57 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:10:28.413 12:21:57 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:10:28.413 [2024-07-12 12:21:57.476433] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:28.413 [2024-07-12 12:21:57.476540] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73443 ] 00:10:28.671 [2024-07-12 12:21:57.613254] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.671 [2024-07-12 12:21:57.695304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.671 [2024-07-12 12:21:57.752988] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:28.930 [2024-07-12 12:21:57.829332] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:10:28.930 00:10:28.930 Compression does not support the verify option, aborting. 00:10:28.930 12:21:57 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:10:28.930 12:21:57 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:28.930 12:21:57 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:10:28.930 ************************************ 00:10:28.930 END TEST accel_compress_verify 00:10:28.930 ************************************ 00:10:28.930 12:21:57 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:10:28.930 12:21:57 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:10:28.930 12:21:57 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:28.930 00:10:28.930 real 0m0.450s 00:10:28.930 user 0m0.281s 00:10:28.930 sys 0m0.117s 00:10:28.930 12:21:57 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:28.930 12:21:57 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:10:28.930 12:21:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:28.930 12:21:57 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:28.930 12:21:57 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:28.930 12:21:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.930 12:21:57 accel -- common/autotest_common.sh@10 -- # set +x 00:10:28.930 ************************************ 00:10:28.930 START TEST accel_wrong_workload 00:10:28.930 ************************************ 00:10:28.930 12:21:57 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:10:28.930 12:21:57 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:10:28.930 12:21:57 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:28.930 12:21:57 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:28.930 12:21:57 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:28.930 12:21:57 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:28.930 12:21:57 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:28.930 12:21:57 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:10:28.930 12:21:57 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:28.930 12:21:57 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:10:28.930 12:21:57 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:28.930 12:21:57 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:28.930 12:21:57 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:28.930 12:21:57 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:28.930 12:21:57 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:28.930 12:21:57 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:10:28.930 12:21:57 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:10:28.930 Unsupported workload type: foobar 00:10:28.930 [2024-07-12 12:21:57.976710] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:28.930 accel_perf options: 00:10:28.930 [-h help message] 00:10:28.930 [-q queue depth per core] 00:10:28.930 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:28.930 [-T number of threads per core 00:10:28.930 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:28.930 [-t time in seconds] 00:10:28.930 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:28.930 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:10:28.930 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:28.930 [-l for compress/decompress workloads, name of uncompressed input file 00:10:28.930 [-S for crc32c workload, use this seed value (default 0) 00:10:28.930 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:28.930 [-f for fill workload, use this BYTE value (default 255) 00:10:28.930 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:28.930 [-y verify result if this switch is on] 00:10:28.931 [-a tasks to allocate per core (default: same value as -q)] 00:10:28.931 Can be used to spread operations across a wider range of memory. 
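The listing above is accel_perf's own usage text. For reference, an invocation built only from flags shown there (and the binary path used throughout this log) would look like:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -t 1 -w crc32c -S 32 -q 64 -y   # run 1 second, crc32c workload, seed 32, queue depth 64, verify result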
00:10:28.931 12:21:57 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:10:28.931 12:21:57 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:28.931 ************************************ 00:10:28.931 END TEST accel_wrong_workload 00:10:28.931 ************************************ 00:10:28.931 12:21:57 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:28.931 12:21:57 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:28.931 00:10:28.931 real 0m0.029s 00:10:28.931 user 0m0.014s 00:10:28.931 sys 0m0.015s 00:10:28.931 12:21:57 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:28.931 12:21:57 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:10:29.190 12:21:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:29.190 12:21:58 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:29.190 12:21:58 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:10:29.190 12:21:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.190 12:21:58 accel -- common/autotest_common.sh@10 -- # set +x 00:10:29.190 ************************************ 00:10:29.190 START TEST accel_negative_buffers 00:10:29.190 ************************************ 00:10:29.190 12:21:58 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:29.190 12:21:58 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:10:29.190 12:21:58 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:29.190 12:21:58 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:10:29.190 12:21:58 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:29.190 12:21:58 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:10:29.190 12:21:58 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:29.190 12:21:58 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:10:29.190 12:21:58 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:29.190 12:21:58 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:10:29.190 12:21:58 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:29.190 12:21:58 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:29.190 12:21:58 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:29.190 12:21:58 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:29.190 12:21:58 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:29.190 12:21:58 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:10:29.190 12:21:58 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:10:29.190 -x option must be non-negative. 
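Each of these cases is driven by the run_test helper whose START TEST/END TEST banners and timing summaries appear throughout this log; the negative-buffers run itself continues below with accel_perf's usage output. A rough sketch of that helper (the real one in autotest_common.sh also times the command and manages xtrace state):

run_test() {
  local test_name=$1; shift
  echo "************************************"
  echo "START TEST $test_name"
  echo "************************************"
  "$@"
  local rc=$?
  echo "************************************"
  echo "END TEST $test_name"
  echo "************************************"
  return $rc
}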
00:10:29.190 [2024-07-12 12:21:58.063511] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:29.190 accel_perf options: 00:10:29.190 [-h help message] 00:10:29.190 [-q queue depth per core] 00:10:29.190 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:29.190 [-T number of threads per core 00:10:29.190 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:29.190 [-t time in seconds] 00:10:29.190 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:29.190 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:10:29.190 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:29.190 [-l for compress/decompress workloads, name of uncompressed input file 00:10:29.190 [-S for crc32c workload, use this seed value (default 0) 00:10:29.190 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:29.190 [-f for fill workload, use this BYTE value (default 255) 00:10:29.190 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:29.190 [-y verify result if this switch is on] 00:10:29.190 [-a tasks to allocate per core (default: same value as -q)] 00:10:29.190 Can be used to spread operations across a wider range of memory. 00:10:29.190 12:21:58 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:10:29.190 12:21:58 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:29.190 12:21:58 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:29.190 12:21:58 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:29.190 00:10:29.190 real 0m0.033s 00:10:29.190 user 0m0.022s 00:10:29.190 sys 0m0.011s 00:10:29.190 12:21:58 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:29.190 12:21:58 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:10:29.190 ************************************ 00:10:29.190 END TEST accel_negative_buffers 00:10:29.190 ************************************ 00:10:29.190 12:21:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:29.190 12:21:58 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:29.190 12:21:58 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:29.190 12:21:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.190 12:21:58 accel -- common/autotest_common.sh@10 -- # set +x 00:10:29.190 ************************************ 00:10:29.190 START TEST accel_crc32c 00:10:29.190 ************************************ 00:10:29.190 12:21:58 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:29.190 12:21:58 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:10:29.190 12:21:58 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:10:29.190 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:29.190 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:29.190 12:21:58 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:29.190 12:21:58 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:10:29.190 12:21:58 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:10:29.190 12:21:58 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:29.190 12:21:58 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:29.190 12:21:58 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:29.190 12:21:58 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:29.190 12:21:58 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:29.190 12:21:58 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:10:29.190 12:21:58 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:10:29.190 [2024-07-12 12:21:58.145547] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:29.190 [2024-07-12 12:21:58.145650] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73502 ] 00:10:29.448 [2024-07-12 12:21:58.284829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.448 [2024-07-12 12:21:58.376737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:29.449 12:21:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:10:30.825 12:21:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:30.825 00:10:30.825 real 0m1.451s 00:10:30.825 user 0m1.246s 00:10:30.825 sys 0m0.113s 00:10:30.825 12:21:59 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:30.825 12:21:59 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:10:30.825 ************************************ 00:10:30.825 END TEST accel_crc32c 00:10:30.825 ************************************ 00:10:30.825 12:21:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:30.825 12:21:59 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:30.825 12:21:59 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:30.825 12:21:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.825 12:21:59 accel -- common/autotest_common.sh@10 -- # set +x 00:10:30.825 ************************************ 00:10:30.825 START TEST accel_crc32c_C2 00:10:30.825 ************************************ 00:10:30.825 12:21:59 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:30.825 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:10:30.825 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:10:30.825 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:30.825 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:30.825 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:30.825 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:30.825 12:21:59 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:10:30.825 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:30.825 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:30.825 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:30.825 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:30.825 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:30.825 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:10:30.825 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:10:30.825 [2024-07-12 12:21:59.655024] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:30.825 [2024-07-12 12:21:59.655125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73536 ] 00:10:30.825 [2024-07-12 12:21:59.793790] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.825 [2024-07-12 12:21:59.876608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:31.084 12:21:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:32.020 12:22:01 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:32.020 00:10:32.020 real 0m1.466s 00:10:32.020 user 0m1.248s 00:10:32.020 sys 0m0.125s 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:32.020 ************************************ 00:10:32.020 END TEST accel_crc32c_C2 00:10:32.020 ************************************ 00:10:32.020 12:22:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:10:32.278 12:22:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:32.278 12:22:01 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:32.278 12:22:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:32.278 12:22:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.278 12:22:01 accel -- common/autotest_common.sh@10 -- # set +x 00:10:32.278 ************************************ 00:10:32.278 START TEST accel_copy 00:10:32.278 ************************************ 00:10:32.278 12:22:01 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:10:32.278 12:22:01 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:10:32.278 12:22:01 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:10:32.278 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.278 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.278 12:22:01 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:32.279 12:22:01 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:32.279 12:22:01 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:10:32.279 12:22:01 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:32.279 12:22:01 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:32.279 12:22:01 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:32.279 12:22:01 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:32.279 12:22:01 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:32.279 12:22:01 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:10:32.279 12:22:01 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:10:32.279 [2024-07-12 12:22:01.175613] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:32.279 [2024-07-12 12:22:01.175722] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73572 ] 00:10:32.279 [2024-07-12 12:22:01.310693] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.538 [2024-07-12 12:22:01.386863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.538 
12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:32.538 12:22:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:33.938 12:22:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:33.938 12:22:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:33.938 12:22:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:33.938 12:22:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:33.938 12:22:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:33.938 12:22:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:33.938 12:22:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:33.938 12:22:02 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:10:33.938 12:22:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:33.938 12:22:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:33.938 12:22:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:33.938 12:22:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:33.939 12:22:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:33.939 12:22:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:33.939 12:22:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:33.939 12:22:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:33.939 12:22:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:33.939 12:22:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:33.939 12:22:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:33.939 12:22:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:33.939 12:22:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:33.939 12:22:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:33.939 12:22:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:33.939 12:22:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:33.939 12:22:02 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:33.939 12:22:02 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:10:33.939 12:22:02 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:33.939 00:10:33.939 real 0m1.453s 00:10:33.939 user 0m1.246s 00:10:33.939 sys 0m0.115s 00:10:33.939 12:22:02 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:33.939 12:22:02 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:10:33.939 ************************************ 00:10:33.939 END TEST accel_copy 00:10:33.939 ************************************ 00:10:33.939 12:22:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:33.939 12:22:02 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:33.939 12:22:02 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:33.939 12:22:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.939 12:22:02 accel -- common/autotest_common.sh@10 -- # set +x 00:10:33.939 ************************************ 00:10:33.939 START TEST accel_fill 00:10:33.939 ************************************ 00:10:33.939 12:22:02 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:33.939 12:22:02 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:10:33.939 [2024-07-12 12:22:02.685856] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:33.939 [2024-07-12 12:22:02.685959] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73606 ] 00:10:33.939 [2024-07-12 12:22:02.825504] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.939 [2024-07-12 12:22:02.924718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:33.939 12:22:02 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:33.939 12:22:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:35.314 12:22:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:35.314 12:22:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:35.314 12:22:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:35.314 12:22:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:35.314 12:22:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:35.314 12:22:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:35.314 12:22:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:35.314 12:22:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:35.314 12:22:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:35.314 12:22:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:35.314 12:22:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:10:35.314 12:22:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:35.314 12:22:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:35.314 12:22:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:35.315 12:22:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:35.315 12:22:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:35.315 12:22:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:35.315 12:22:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:35.315 12:22:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:35.315 12:22:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:35.315 12:22:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:35.315 12:22:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:35.315 12:22:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:35.315 12:22:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:35.315 12:22:04 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:35.315 12:22:04 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:10:35.315 12:22:04 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:35.315 00:10:35.315 real 0m1.479s 00:10:35.315 user 0m1.274s 00:10:35.315 sys 0m0.114s 00:10:35.315 12:22:04 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:35.315 12:22:04 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:10:35.315 ************************************ 00:10:35.315 END TEST accel_fill 00:10:35.315 ************************************ 00:10:35.315 12:22:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:35.315 12:22:04 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:35.315 12:22:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:35.315 12:22:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.315 12:22:04 accel -- common/autotest_common.sh@10 -- # set +x 00:10:35.315 ************************************ 00:10:35.315 START TEST accel_copy_crc32c 00:10:35.315 ************************************ 00:10:35.315 12:22:04 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:10:35.315 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:10:35.315 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:10:35.315 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.315 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:35.315 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.315 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:35.315 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:10:35.315 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:35.315 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:35.315 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:35.315 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:35.315 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:35.315 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:10:35.315 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:10:35.315 [2024-07-12 12:22:04.218443] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:35.315 [2024-07-12 12:22:04.218564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73641 ] 00:10:35.315 [2024-07-12 12:22:04.356887] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.573 [2024-07-12 12:22:04.428113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.573 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:35.574 12:22:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
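The IFS=: / read -r var val / case "$var" lines repeated throughout this run are the harness parsing accel_perf's configuration dump one "key: value" line at a time and remembering the opcode and module that were reported. A minimal sketch of that loop, reconstructed from the trace rather than copied from accel/accel.sh (the key patterns and the input file name are assumptions):

    # Assumed shape of the parse loop traced at accel.sh@19-23: split each line of the
    # accel_perf output on ':' and record the operation and module actually used.
    while IFS=: read -r var val; do
        case "$var" in
            *opcode*) accel_opc=${val//[[:space:]]/} ;;    # e.g. copy_crc32c
            *module*) accel_module=${val//[[:space:]]/} ;; # e.g. software
        esac
    done < accel_perf_output.txt   # hypothetical capture of the accel_perf stdout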
00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:36.949 00:10:36.949 real 0m1.449s 00:10:36.949 user 0m1.239s 00:10:36.949 sys 0m0.117s 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:36.949 12:22:05 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:10:36.949 ************************************ 00:10:36.949 END TEST accel_copy_crc32c 00:10:36.949 ************************************ 00:10:36.949 12:22:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:36.949 12:22:05 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:36.949 12:22:05 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:36.949 12:22:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.949 12:22:05 accel -- common/autotest_common.sh@10 -- # set +x 00:10:36.949 ************************************ 00:10:36.949 START TEST accel_copy_crc32c_C2 00:10:36.949 ************************************ 00:10:36.949 12:22:05 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:36.949 12:22:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:10:36.949 12:22:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:10:36.949 12:22:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.949 12:22:05 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:36.949 12:22:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.949 12:22:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:36.949 12:22:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:10:36.949 12:22:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:36.949 12:22:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:36.949 12:22:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:36.949 12:22:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:36.949 12:22:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:36.949 12:22:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:10:36.949 12:22:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:10:36.949 [2024-07-12 12:22:05.711619] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:36.949 [2024-07-12 12:22:05.711697] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73675 ] 00:10:36.949 [2024-07-12 12:22:05.845124] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.949 [2024-07-12 12:22:05.944781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.949 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.950 12:22:06 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:36.950 12:22:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:38.324 00:10:38.324 real 0m1.470s 00:10:38.324 user 0m1.257s 00:10:38.324 sys 0m0.121s 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
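The three [[ ... ]] tests at accel.sh@27 in the copy_crc32c_C2 block above are the pass/fail criterion for every run in this section: a module and an opcode must have been parsed from the output, and the module must be the expected one (software here). A stand-alone sketch of the same checks, assuming the two variables were filled in by the parse loop:

    # Mirror of the accel.sh@27 assertions seen in the trace; the expected module
    # name is hard-coded here purely for illustration.
    [[ -n "$accel_module" ]] || exit 1
    [[ -n "$accel_opc" ]] || exit 1
    [[ "$accel_module" == "software" ]] || exit 1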
00:10:38.324 ************************************ 00:10:38.324 12:22:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:10:38.324 END TEST accel_copy_crc32c_C2 00:10:38.324 ************************************ 00:10:38.324 12:22:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:38.324 12:22:07 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:38.324 12:22:07 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:38.324 12:22:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.324 12:22:07 accel -- common/autotest_common.sh@10 -- # set +x 00:10:38.324 ************************************ 00:10:38.324 START TEST accel_dualcast 00:10:38.324 ************************************ 00:10:38.324 12:22:07 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:10:38.324 12:22:07 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:10:38.324 12:22:07 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:10:38.324 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.324 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.324 12:22:07 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:38.324 12:22:07 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:10:38.324 12:22:07 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:38.324 12:22:07 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:38.324 12:22:07 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:38.324 12:22:07 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:38.324 12:22:07 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:38.324 12:22:07 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:38.324 12:22:07 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:10:38.324 12:22:07 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:10:38.324 [2024-07-12 12:22:07.239922] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
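For the dualcast run that has just started, the accel_perf invocation recorded above can be reproduced by hand outside the harness. A sketch using only flags that appear in this log; the -c /dev/fd/62 config descriptor is dropped, on the assumption that accel_perf falls back to its defaults without it:

    # Run the dualcast workload for 1 second with the same -y switch the harness passes;
    # the path assumes the vagrant checkout used by this job.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y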
00:10:38.324 [2024-07-12 12:22:07.240026] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73710 ] 00:10:38.324 [2024-07-12 12:22:07.377039] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.582 [2024-07-12 12:22:07.468130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.582 12:22:07 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:38.582 12:22:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:10:39.957 12:22:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:39.957 00:10:39.957 real 0m1.470s 00:10:39.957 user 0m1.260s 00:10:39.957 sys 0m0.117s 00:10:39.957 12:22:08 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:39.957 ************************************ 00:10:39.957 END TEST accel_dualcast 00:10:39.957 ************************************ 00:10:39.957 12:22:08 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:10:39.957 12:22:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:39.957 12:22:08 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:39.957 12:22:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:39.957 12:22:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.957 12:22:08 accel -- common/autotest_common.sh@10 -- # set +x 00:10:39.957 ************************************ 00:10:39.957 START TEST accel_compare 00:10:39.957 ************************************ 00:10:39.957 12:22:08 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:10:39.957 12:22:08 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:10:39.957 12:22:08 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:10:39.957 12:22:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:39.957 12:22:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:39.957 12:22:08 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:39.957 12:22:08 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:10:39.957 12:22:08 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:39.957 12:22:08 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:39.957 12:22:08 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:39.957 12:22:08 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:39.957 12:22:08 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:39.957 12:22:08 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:39.957 12:22:08 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:10:39.957 12:22:08 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:10:39.957 [2024-07-12 12:22:08.760228] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
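Every test in this section is driven through run_test, which is what prints the START TEST/END TEST banners and the real/user/sys timing around each accel_test invocation (the '[' 7 -le 1 ']' check is its argument-count guard). A rough sketch of that wrapper as it appears from the trace; the real implementation in common/autotest_common.sh has more to it:

    # Assumed shape of run_test, reconstructed from the banners and timing in this log.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"     # source of the real/user/sys lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    # e.g. run_test accel_compare accel_test -t 1 -w compare -y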
00:10:39.957 [2024-07-12 12:22:08.760320] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73739 ] 00:10:39.957 [2024-07-12 12:22:08.897193] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.957 [2024-07-12 12:22:08.985063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:40.216 12:22:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:41.152 ************************************ 00:10:41.152 END TEST 
accel_compare 00:10:41.152 ************************************ 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:10:41.152 12:22:10 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:41.152 00:10:41.152 real 0m1.458s 00:10:41.152 user 0m1.238s 00:10:41.152 sys 0m0.128s 00:10:41.152 12:22:10 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:41.152 12:22:10 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:10:41.410 12:22:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:41.410 12:22:10 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:41.410 12:22:10 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:41.410 12:22:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:41.410 12:22:10 accel -- common/autotest_common.sh@10 -- # set +x 00:10:41.410 ************************************ 00:10:41.410 START TEST accel_xor 00:10:41.410 ************************************ 00:10:41.410 12:22:10 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:10:41.410 12:22:10 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:10:41.410 12:22:10 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:10:41.410 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.410 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.410 12:22:10 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:41.410 12:22:10 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:41.410 12:22:10 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:10:41.410 12:22:10 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:41.410 12:22:10 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:41.410 12:22:10 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:41.410 12:22:10 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:41.410 12:22:10 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:41.410 12:22:10 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:10:41.410 12:22:10 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:10:41.410 [2024-07-12 12:22:10.275360] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
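The build_accel_config/accel_json_cfg lines at the start of each run, followed by jq -r ., show the harness assembling a JSON configuration and handing it to accel_perf as -c /dev/fd/62. How descriptor 62 is wired up is not visible in this excerpt; one way to get the same effect (an assumption, not necessarily what accel.sh does) is:

    # Expose a JSON document on descriptor 62 and point accel_perf at it. The empty
    # subsystems config is a placeholder; the harness builds its own accel_json_cfg,
    # and whether this minimal document is accepted unchanged is an assumption.
    exec 62< <(echo '{"subsystems": []}' | jq -r .)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
    exec 62<&-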
00:10:41.410 [2024-07-12 12:22:10.275487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73779 ] 00:10:41.410 [2024-07-12 12:22:10.412713] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.668 [2024-07-12 12:22:10.505806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:41.668 12:22:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.044 12:22:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:43.044 12:22:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.044 12:22:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.044 12:22:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:43.045 12:22:11 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:43.045 00:10:43.045 real 0m1.469s 00:10:43.045 user 0m1.255s 00:10:43.045 sys 0m0.121s 00:10:43.045 12:22:11 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:43.045 12:22:11 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:10:43.045 ************************************ 00:10:43.045 END TEST accel_xor 00:10:43.045 ************************************ 00:10:43.045 12:22:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:43.045 12:22:11 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:43.045 12:22:11 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:43.045 12:22:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.045 12:22:11 accel -- common/autotest_common.sh@10 -- # set +x 00:10:43.045 ************************************ 00:10:43.045 START TEST accel_xor 00:10:43.045 ************************************ 00:10:43.045 12:22:11 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:10:43.045 12:22:11 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:10:43.045 [2024-07-12 12:22:11.802676] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
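The second xor run starting above differs from the first only in the extra -x 3 argument (the first run's trace showed a value of 2 at this point; the lines that follow show 3, presumably the number of xor source buffers). Reproducing just that variant by hand, with the workload flags copied from the traced command and the config descriptor again left out:

    # xor workload with -x 3, as in the traced accel_perf command line.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3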
00:10:43.045 [2024-07-12 12:22:11.802772] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73808 ] 00:10:43.045 [2024-07-12 12:22:11.941314] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.045 [2024-07-12 12:22:12.040428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
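Each run in this section ends with the same real/user/sys timing block. If this console output is saved to a file, the per-test wall-clock figures can be pulled out with a one-liner (build.log is a placeholder name for the saved log):

    # Extract the wall-clock time of every test from a saved copy of this output.
    grep -Eo 'real[[:space:]]+[0-9]+m[0-9.]+s' build.log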
00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:43.045 12:22:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:44.422 12:22:13 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:10:44.422 12:22:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:44.422 00:10:44.422 real 0m1.463s 00:10:44.422 user 0m1.253s 00:10:44.422 sys 0m0.114s 00:10:44.422 12:22:13 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:44.422 12:22:13 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:10:44.422 ************************************ 00:10:44.422 END TEST accel_xor 00:10:44.422 ************************************ 00:10:44.422 12:22:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:44.422 12:22:13 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:44.422 12:22:13 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:44.422 12:22:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.422 12:22:13 accel -- common/autotest_common.sh@10 -- # set +x 00:10:44.422 ************************************ 00:10:44.422 START TEST accel_dif_verify 00:10:44.422 ************************************ 00:10:44.422 12:22:13 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:10:44.422 12:22:13 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:10:44.422 12:22:13 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:10:44.422 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:44.422 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:44.422 12:22:13 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:44.422 12:22:13 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:44.422 12:22:13 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:10:44.422 12:22:13 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:44.422 12:22:13 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:44.422 12:22:13 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:44.422 12:22:13 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:44.422 12:22:13 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:44.422 12:22:13 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:10:44.422 12:22:13 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:10:44.422 [2024-07-12 12:22:13.316867] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
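The XOR case above completed in roughly 1.46 s of wall-clock time on the software module, and the run that starts here switches the same binary to the dif_verify workload. A hedged sketch of cycling through the three DIF workloads this part of the log exercises, using only the binary path and flags recorded in the trace:

  # Sketch: drive the DIF workloads from this section back to back (no extra options assumed).
  for w in dif_verify dif_generate dif_generate_copy; do
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w "$w"
  done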
00:10:44.422 [2024-07-12 12:22:13.316953] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73848 ] 00:10:44.422 [2024-07-12 12:22:13.453041] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.681 [2024-07-12 12:22:13.528798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:44.681 12:22:13 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:44.681 12:22:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:46.058 12:22:14 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:10:46.058 12:22:14 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:46.058 00:10:46.058 real 0m1.438s 00:10:46.058 user 0m1.232s 00:10:46.058 sys 0m0.108s 00:10:46.058 ************************************ 00:10:46.058 END TEST accel_dif_verify 00:10:46.058 ************************************ 00:10:46.058 12:22:14 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.058 12:22:14 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:10:46.058 12:22:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:46.058 12:22:14 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:46.058 12:22:14 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:46.058 12:22:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.058 12:22:14 accel -- common/autotest_common.sh@10 -- # set +x 00:10:46.058 ************************************ 00:10:46.058 START TEST accel_dif_generate 00:10:46.058 ************************************ 00:10:46.058 12:22:14 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:10:46.058 12:22:14 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:10:46.058 12:22:14 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:10:46.058 12:22:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:46.058 12:22:14 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:46.058 12:22:14 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:46.058 12:22:14 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:46.058 12:22:14 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:10:46.058 12:22:14 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:46.058 12:22:14 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:46.058 12:22:14 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:46.058 12:22:14 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:46.058 12:22:14 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:46.058 12:22:14 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:10:46.058 12:22:14 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:10:46.058 [2024-07-12 12:22:14.813473] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:46.058 [2024-07-12 12:22:14.813584] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73877 ] 00:10:46.058 [2024-07-12 12:22:14.950421] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.058 [2024-07-12 12:22:15.043820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:10:46.058 12:22:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:46.058 12:22:15 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:10:46.059 12:22:15 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:46.059 12:22:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:10:47.438 12:22:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:47.438 00:10:47.438 real 0m1.467s 
00:10:47.438 user 0m1.256s 00:10:47.438 sys 0m0.120s 00:10:47.438 ************************************ 00:10:47.438 END TEST accel_dif_generate 00:10:47.438 ************************************ 00:10:47.438 12:22:16 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:47.438 12:22:16 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:10:47.438 12:22:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:47.438 12:22:16 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:47.438 12:22:16 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:47.438 12:22:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:47.438 12:22:16 accel -- common/autotest_common.sh@10 -- # set +x 00:10:47.438 ************************************ 00:10:47.438 START TEST accel_dif_generate_copy 00:10:47.438 ************************************ 00:10:47.438 12:22:16 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:10:47.438 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:10:47.438 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:10:47.438 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:47.438 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:47.438 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:47.438 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:47.438 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:10:47.438 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:47.438 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:47.438 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:47.438 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:47.438 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:47.438 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:10:47.438 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:10:47.438 [2024-07-12 12:22:16.329061] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
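Each test block ends with the shell's real/user/sys timings (here about 1.47 s wall clock for dif_generate) before the next run_test begins. If the per-test wall-clock figures are wanted in one place, a small post-processing sketch over a saved copy of this log (file name assumed) would be:

  # Sketch: pull the per-test wall-clock timings out of a saved log file (name assumed).
  grep -oE 'real[[:space:]]+[0-9]+m[0-9.]+s' autotest.log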
00:10:47.438 [2024-07-12 12:22:16.329158] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73917 ] 00:10:47.438 [2024-07-12 12:22:16.465423] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.696 [2024-07-12 12:22:16.559478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.696 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:47.696 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:47.696 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:47.696 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:47.696 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:47.696 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:47.697 12:22:16 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:47.697 12:22:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:49.075 ************************************ 00:10:49.075 END TEST accel_dif_generate_copy 00:10:49.075 ************************************ 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:49.075 00:10:49.075 real 0m1.469s 00:10:49.075 user 0m1.269s 00:10:49.075 sys 0m0.107s 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.075 12:22:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:10:49.075 12:22:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:49.075 12:22:17 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:10:49.075 12:22:17 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:49.075 12:22:17 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:10:49.075 12:22:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.075 12:22:17 accel -- common/autotest_common.sh@10 -- # set +x 00:10:49.075 ************************************ 00:10:49.075 START TEST accel_comp 00:10:49.075 ************************************ 00:10:49.075 12:22:17 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:49.075 12:22:17 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:10:49.075 12:22:17 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:10:49.075 12:22:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:49.075 12:22:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:49.075 12:22:17 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:49.075 12:22:17 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:49.075 12:22:17 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:10:49.075 12:22:17 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:49.075 12:22:17 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:49.075 12:22:17 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:49.075 12:22:17 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:49.075 12:22:17 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:49.075 12:22:17 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:10:49.075 12:22:17 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:10:49.075 [2024-07-12 12:22:17.856684] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:49.076 [2024-07-12 12:22:17.856811] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73946 ] 00:10:49.076 [2024-07-12 12:22:17.992257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.076 [2024-07-12 12:22:18.055838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:49.076 12:22:18 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:49.076 12:22:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:10:50.454 12:22:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:50.454 00:10:50.454 real 0m1.439s 00:10:50.454 user 0m1.231s 00:10:50.454 sys 0m0.116s 00:10:50.454 12:22:19 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:50.454 12:22:19 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:10:50.454 ************************************ 00:10:50.454 END TEST accel_comp 00:10:50.454 ************************************ 00:10:50.454 12:22:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:50.454 12:22:19 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:50.454 12:22:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:50.454 12:22:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.454 12:22:19 accel -- common/autotest_common.sh@10 -- # set +x 00:10:50.454 ************************************ 00:10:50.454 START TEST accel_decomp 00:10:50.454 ************************************ 00:10:50.454 12:22:19 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:50.454 12:22:19 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:10:50.454 12:22:19 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:10:50.454 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:50.454 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:50.454 12:22:19 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:50.454 12:22:19 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:50.454 12:22:19 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:10:50.454 12:22:19 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:50.454 12:22:19 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:50.454 12:22:19 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:50.454 12:22:19 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:50.454 12:22:19 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:50.454 12:22:19 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:10:50.454 12:22:19 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:10:50.454 [2024-07-12 12:22:19.344355] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:50.455 [2024-07-12 12:22:19.344451] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73985 ] 00:10:50.455 [2024-07-12 12:22:19.476591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.713 [2024-07-12 12:22:19.567927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.713 12:22:19 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:50.713 12:22:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:52.088 12:22:20 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:52.088 00:10:52.088 real 0m1.457s 00:10:52.088 user 0m1.251s 00:10:52.088 sys 0m0.114s 00:10:52.088 12:22:20 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:52.088 ************************************ 00:10:52.088 END TEST accel_decomp 00:10:52.088 ************************************ 00:10:52.088 12:22:20 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:10:52.088 12:22:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:52.088 12:22:20 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:52.088 12:22:20 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:10:52.088 12:22:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.088 12:22:20 accel -- common/autotest_common.sh@10 -- # set +x 00:10:52.088 ************************************ 00:10:52.088 START TEST accel_decomp_full 00:10:52.088 ************************************ 00:10:52.088 12:22:20 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:52.088 12:22:20 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:10:52.088 12:22:20 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:10:52.088 12:22:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:52.088 12:22:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:52.088 12:22:20 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:52.088 12:22:20 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:52.088 12:22:20 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:10:52.088 12:22:20 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:52.088 12:22:20 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:52.088 12:22:20 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:52.088 12:22:20 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:52.088 12:22:20 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:52.088 12:22:20 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:10:52.088 12:22:20 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:10:52.088 [2024-07-12 12:22:20.860942] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
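The accel_decomp_full run starting here repeats the previous command with one extra flag, -o 0; in the values below it shows up as the transfer size changing from '4096 bytes' to '111250 bytes', i.e. buffers sized to the whole input file. A sketch of timing the two variants side by side (the loop and /usr/bin/time are illustrative additions, not part of the harness):

  # compare the default 4 KiB-buffer run with the full-buffer (-o 0) run
  SPDK=/home/vagrant/spdk_repo/spdk
  for extra in "" "-o 0"; do
      /usr/bin/time -p "$SPDK"/build/examples/accel_perf -t 1 -w decompress \
          -l "$SPDK"/test/accel/bib -y $extra
  done
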
00:10:52.088 [2024-07-12 12:22:20.861028] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74015 ] 00:10:52.088 [2024-07-12 12:22:20.999882] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.088 [2024-07-12 12:22:21.087496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.088 12:22:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:52.088 12:22:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:52.088 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:52.088 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:52.088 12:22:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:52.088 12:22:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:52.088 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:52.088 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:52.088 12:22:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:52.088 12:22:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:52.088 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:52.088 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:52.088 12:22:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:10:52.088 12:22:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:52.088 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:52.088 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:52.088 12:22:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:52.088 12:22:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:52.088 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:52.088 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:52.089 12:22:21 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:52.089 12:22:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:53.465 12:22:22 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:53.465 12:22:22 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:53.465 00:10:53.465 real 0m1.479s 00:10:53.465 user 0m1.266s 00:10:53.465 sys 0m0.120s 00:10:53.465 12:22:22 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:53.465 12:22:22 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:10:53.465 ************************************ 00:10:53.465 END TEST accel_decomp_full 00:10:53.465 ************************************ 00:10:53.465 12:22:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:53.465 12:22:22 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:53.465 12:22:22 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:10:53.465 12:22:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.465 12:22:22 accel -- common/autotest_common.sh@10 -- # set +x 00:10:53.465 ************************************ 00:10:53.465 START TEST accel_decomp_mcore 00:10:53.465 ************************************ 00:10:53.465 12:22:22 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:53.465 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:53.465 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:53.465 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.465 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.465 12:22:22 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:53.466 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:53.466 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:53.466 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:53.466 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:53.466 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:53.466 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:53.466 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:53.466 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:53.466 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:53.466 [2024-07-12 12:22:22.396329] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:53.466 [2024-07-12 12:22:22.396947] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74050 ] 00:10:53.466 [2024-07-12 12:22:22.534600] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.725 [2024-07-12 12:22:22.632900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.725 [2024-07-12 12:22:22.633058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.725 [2024-07-12 12:22:22.634071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.725 [2024-07-12 12:22:22.634082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.725 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:10:53.726 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.726 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.726 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.726 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:53.726 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:10:53.726 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.726 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.726 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:53.726 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.726 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.726 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.726 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:53.726 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.726 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.726 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:53.726 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:53.726 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:53.726 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:53.726 12:22:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.099 12:22:23 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:55.099 00:10:55.099 real 0m1.482s 00:10:55.099 user 0m4.660s 00:10:55.099 sys 0m0.140s 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.099 12:22:23 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:55.099 ************************************ 00:10:55.099 END TEST accel_decomp_mcore 00:10:55.099 ************************************ 00:10:55.099 12:22:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:55.099 12:22:23 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:55.099 12:22:23 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:55.099 12:22:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.099 12:22:23 accel -- common/autotest_common.sh@10 -- # set +x 00:10:55.099 ************************************ 00:10:55.099 START TEST accel_decomp_full_mcore 00:10:55.099 ************************************ 00:10:55.099 12:22:23 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:55.099 12:22:23 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:55.099 12:22:23 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:55.099 12:22:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.099 12:22:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.099 12:22:23 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:55.099 12:22:23 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:55.099 12:22:23 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:55.099 12:22:23 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:55.099 12:22:23 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:55.099 12:22:23 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:55.099 12:22:23 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:55.099 12:22:23 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:55.099 12:22:23 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:55.099 12:22:23 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:55.099 [2024-07-12 12:22:23.927279] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:55.099 [2024-07-12 12:22:23.927403] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74087 ] 00:10:55.099 [2024-07-12 12:22:24.057712] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.099 [2024-07-12 12:22:24.153877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.099 [2024-07-12 12:22:24.153990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.099 [2024-07-12 12:22:24.154101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.099 [2024-07-12 12:22:24.154102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:55.357 12:22:24 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.357 12:22:24 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:55.357 12:22:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:56.734 00:10:56.734 real 0m1.494s 00:10:56.734 user 0m4.716s 00:10:56.734 sys 0m0.119s 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:56.734 ************************************ 00:10:56.734 END TEST accel_decomp_full_mcore 00:10:56.734 ************************************ 00:10:56.734 12:22:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:56.734 12:22:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:56.734 12:22:25 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:56.734 12:22:25 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:10:56.734 12:22:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:56.734 12:22:25 accel -- common/autotest_common.sh@10 -- # set +x 00:10:56.734 ************************************ 00:10:56.734 START TEST accel_decomp_mthread 00:10:56.734 ************************************ 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:56.734 [2024-07-12 12:22:25.470716] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
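The two -m 0xf runs above (accel_decomp_mcore and accel_decomp_full_mcore) hand the EAL four cores, hence 'Total cores available: 4', reactors on cores 0 through 3, and a user time well above the wall-clock time. If the mask should track a core count instead of being hard-coded, a hypothetical wrapper could derive it; a sketch, where N and MASK are illustrative names only:

  # build a hex core mask covering the first N cores, then run the multi-core decompress test
  SPDK=/home/vagrant/spdk_repo/spdk
  N=4
  MASK=$(printf '0x%x' $(( (1 << N) - 1 )))   # N=4 gives 0xf, matching the runs above
  "$SPDK"/build/examples/accel_perf -t 1 -w decompress -l "$SPDK"/test/accel/bib -y -m "$MASK"
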
00:10:56.734 [2024-07-12 12:22:25.470880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74126 ] 00:10:56.734 [2024-07-12 12:22:25.608788] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.734 [2024-07-12 12:22:25.680784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
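Taken together, the four single-threaded runs so far differ only in whether -o 0 and -m 0xf are present, so the same coverage can be expressed as a small nested loop; this is only a sketch of that matrix, not the suite's actual run_test wiring:

  # walk the {default buffers, full buffers (-o 0)} x {single core, 4-core mask (-m 0xf)} matrix
  SPDK=/home/vagrant/spdk_repo/spdk
  for bufs in "" "-o 0"; do
      for mask in "" "-m 0xf"; do
          "$SPDK"/build/examples/accel_perf -t 1 -w decompress \
              -l "$SPDK"/test/accel/bib -y $bufs $mask
      done
  done
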
00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.734 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.735 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:10:56.735 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.735 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.735 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.735 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:56.735 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.735 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.735 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.735 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:56.735 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.735 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.735 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.735 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:56.735 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.735 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.735 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:56.735 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:56.735 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:56.735 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:56.735 12:22:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.111 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:58.111 12:22:26 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:10:58.111 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.111 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.111 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:58.111 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.111 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.111 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.111 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:58.111 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.111 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.111 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.111 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:58.112 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.112 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.112 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.112 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:58.112 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.112 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.112 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.112 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:58.112 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.112 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.112 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.112 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:58.112 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.112 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.112 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.112 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:58.112 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:58.112 12:22:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:58.112 00:10:58.112 real 0m1.444s 00:10:58.112 user 0m1.234s 00:10:58.112 sys 0m0.117s 00:10:58.112 12:22:26 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:58.112 12:22:26 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:58.112 ************************************ 00:10:58.112 END TEST accel_decomp_mthread 00:10:58.112 ************************************ 00:10:58.112 12:22:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:58.112 12:22:26 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:58.112 12:22:26 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:58.112 12:22:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.112 12:22:26 accel -- common/autotest_common.sh@10 -- # set +x 00:10:58.112 ************************************ 00:10:58.112 START 
TEST accel_decomp_full_mthread 00:10:58.112 ************************************ 00:10:58.112 12:22:26 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:58.112 12:22:26 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:58.112 12:22:26 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:58.112 12:22:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.112 12:22:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.112 12:22:26 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:58.112 12:22:26 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:58.112 12:22:26 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:58.112 12:22:26 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:58.112 12:22:26 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:58.112 12:22:26 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:58.112 12:22:26 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:58.112 12:22:26 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:58.112 12:22:26 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:58.112 12:22:26 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:58.112 [2024-07-12 12:22:26.979508] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
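The threaded tests (accel_decomp_mthread above and accel_decomp_full_mthread starting here) add -T 2, which the harness records as the value 2, presumably two worker threads on the single enabled core; the full variant additionally keeps -o 0. A sketch of both invocations under the same path assumptions as before:

  SPDK=/home/vagrant/spdk_repo/spdk
  # two threads (-T 2), default 4 KiB buffers
  "$SPDK"/build/examples/accel_perf -t 1 -w decompress -l "$SPDK"/test/accel/bib -y -T 2
  # two threads with full-sized buffers (-o 0)
  "$SPDK"/build/examples/accel_perf -t 1 -w decompress -l "$SPDK"/test/accel/bib -y -o 0 -T 2
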
00:10:58.112 [2024-07-12 12:22:26.980000] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74161 ] 00:10:58.112 [2024-07-12 12:22:27.124267] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.371 [2024-07-12 12:22:27.215125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:58.371 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.372 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.372 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.372 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:10:58.372 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.372 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.372 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.372 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:58.372 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.372 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.372 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.372 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:58.372 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.372 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.372 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.372 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:58.372 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.372 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.372 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:58.372 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:58.372 12:22:27 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:58.372 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:58.372 12:22:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.747 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.748 ************************************ 00:10:59.748 END TEST accel_decomp_full_mthread 00:10:59.748 ************************************ 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:59.748 00:10:59.748 real 0m1.495s 00:10:59.748 user 0m1.282s 00:10:59.748 sys 0m0.121s 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:59.748 12:22:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
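For reference, the accel_decomp_full_mthread case that just completed reduces to a single accel_perf invocation; a minimal sketch with the same flags recorded above (workspace paths as in this run, software module assumed since every build_accel_config branch above evaluated false and no JSON config was supplied):

  # decompress the bib test file for 1 second using the multi-threaded (-T 2) variant;
  # the harness passes its (empty) accel JSON config on fd 62 via -c
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2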
00:10:59.748 12:22:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:59.748 12:22:28 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:10:59.748 12:22:28 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:59.748 12:22:28 accel -- accel/accel.sh@137 -- # build_accel_config 00:10:59.748 12:22:28 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:59.748 12:22:28 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:59.748 12:22:28 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:59.748 12:22:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.748 12:22:28 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:59.748 12:22:28 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:59.748 12:22:28 accel -- common/autotest_common.sh@10 -- # set +x 00:10:59.748 12:22:28 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:59.748 12:22:28 accel -- accel/accel.sh@40 -- # local IFS=, 00:10:59.748 12:22:28 accel -- accel/accel.sh@41 -- # jq -r . 00:10:59.748 ************************************ 00:10:59.748 START TEST accel_dif_functional_tests 00:10:59.748 ************************************ 00:10:59.748 12:22:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:59.748 [2024-07-12 12:22:28.552877] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:59.748 [2024-07-12 12:22:28.553227] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74197 ] 00:10:59.748 [2024-07-12 12:22:28.692733] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:59.748 [2024-07-12 12:22:28.796538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.748 [2024-07-12 12:22:28.796642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.748 [2024-07-12 12:22:28.796648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.006 [2024-07-12 12:22:28.852500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:00.006 00:11:00.006 00:11:00.006 CUnit - A unit testing framework for C - Version 2.1-3 00:11:00.006 http://cunit.sourceforge.net/ 00:11:00.006 00:11:00.006 00:11:00.006 Suite: accel_dif 00:11:00.006 Test: verify: DIF generated, GUARD check ...passed 00:11:00.006 Test: verify: DIF generated, APPTAG check ...passed 00:11:00.006 Test: verify: DIF generated, REFTAG check ...passed 00:11:00.006 Test: verify: DIF not generated, GUARD check ...[2024-07-12 12:22:28.889704] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:00.006 passed 00:11:00.006 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 12:22:28.890415] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:00.006 passed 00:11:00.006 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 12:22:28.890848] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:00.006 passed 00:11:00.006 Test: verify: APPTAG correct, APPTAG check ...passed 00:11:00.006 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 12:22:28.891601] dif.c: 841:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:00.006 passed 00:11:00.006 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:11:00.006 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:11:00.006 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:11:00.006 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 12:22:28.892352] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:00.006 passed 00:11:00.006 Test: verify copy: DIF generated, GUARD check ...passed 00:11:00.006 Test: verify copy: DIF generated, APPTAG check ...passed 00:11:00.006 Test: verify copy: DIF generated, REFTAG check ...passed 00:11:00.006 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 12:22:28.892955] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:00.006 passed 00:11:00.006 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 12:22:28.893128] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:00.006 passed 00:11:00.006 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 12:22:28.893264] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:00.006 passed 00:11:00.006 Test: generate copy: DIF generated, GUARD check ...passed 00:11:00.006 Test: generate copy: DIF generated, APTTAG check ...passed 00:11:00.006 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:00.006 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:11:00.006 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:11:00.006 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:11:00.006 Test: generate copy: iovecs-len validate ...passed 00:11:00.006 Test: generate copy: buffer alignment validate ...[2024-07-12 12:22:28.894002] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:11:00.006 passed 00:11:00.006 00:11:00.006 Run Summary: Type Total Ran Passed Failed Inactive 00:11:00.006 suites 1 1 n/a 0 0 00:11:00.006 tests 26 26 26 0 0 00:11:00.006 asserts 115 115 115 0 n/a 00:11:00.006 00:11:00.006 Elapsed time = 0.010 seconds 00:11:00.006 00:11:00.006 real 0m0.573s 00:11:00.006 user 0m0.776s 00:11:00.006 sys 0m0.151s 00:11:00.006 ************************************ 00:11:00.006 END TEST accel_dif_functional_tests 00:11:00.006 ************************************ 00:11:00.006 12:22:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:00.006 12:22:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:11:00.265 12:22:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:00.265 00:11:00.265 real 0m33.891s 00:11:00.265 user 0m35.617s 00:11:00.265 sys 0m4.008s 00:11:00.265 12:22:29 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:00.265 ************************************ 00:11:00.265 END TEST accel 00:11:00.265 ************************************ 00:11:00.265 12:22:29 accel -- common/autotest_common.sh@10 -- # set +x 00:11:00.265 12:22:29 -- common/autotest_common.sh@1142 -- # return 0 00:11:00.265 12:22:29 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:00.265 12:22:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:00.265 12:22:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.265 12:22:29 -- common/autotest_common.sh@10 -- # set +x 00:11:00.265 ************************************ 00:11:00.265 START TEST accel_rpc 00:11:00.265 ************************************ 00:11:00.265 12:22:29 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:00.265 * Looking for test storage... 00:11:00.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:00.265 12:22:29 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:00.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.265 12:22:29 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=74261 00:11:00.265 12:22:29 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 74261 00:11:00.265 12:22:29 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:00.265 12:22:29 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 74261 ']' 00:11:00.265 12:22:29 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.265 12:22:29 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:00.265 12:22:29 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.265 12:22:29 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:00.265 12:22:29 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.265 [2024-07-12 12:22:29.321523] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:00.265 [2024-07-12 12:22:29.321919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74261 ] 00:11:00.523 [2024-07-12 12:22:29.455633] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.523 [2024-07-12 12:22:29.548864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.467 12:22:30 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:01.467 12:22:30 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:01.467 12:22:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:01.467 12:22:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:11:01.467 12:22:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:01.467 12:22:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:11:01.467 12:22:30 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:01.467 12:22:30 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:01.467 12:22:30 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.467 12:22:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.467 ************************************ 00:11:01.467 START TEST accel_assign_opcode 00:11:01.467 ************************************ 00:11:01.467 12:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:11:01.467 12:22:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:01.467 12:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.467 12:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:01.467 [2024-07-12 12:22:30.353680] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:01.467 12:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.467 12:22:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:01.467 12:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.467 12:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:01.467 [2024-07-12 12:22:30.365687] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:01.467 12:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.467 12:22:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:01.467 12:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.467 12:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:01.467 [2024-07-12 12:22:30.428347] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:01.726 12:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.726 12:22:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:01.726 12:22:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:11:01.726 12:22:30 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:11:01.726 12:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.726 12:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:01.726 12:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.726 software 00:11:01.726 00:11:01.726 real 0m0.289s 00:11:01.726 user 0m0.057s 00:11:01.726 sys 0m0.006s 00:11:01.726 12:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:01.726 12:22:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:01.726 ************************************ 00:11:01.726 END TEST accel_assign_opcode 00:11:01.726 ************************************ 00:11:01.726 12:22:30 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:11:01.726 12:22:30 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 74261 00:11:01.726 12:22:30 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 74261 ']' 00:11:01.726 12:22:30 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 74261 00:11:01.726 12:22:30 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:11:01.726 12:22:30 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:01.726 12:22:30 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74261 00:11:01.726 12:22:30 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:01.726 12:22:30 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:01.726 killing process with pid 74261 00:11:01.726 12:22:30 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74261' 00:11:01.726 12:22:30 accel_rpc -- common/autotest_common.sh@967 -- # kill 74261 00:11:01.726 12:22:30 accel_rpc -- common/autotest_common.sh@972 -- # wait 74261 00:11:01.986 00:11:01.986 real 0m1.895s 00:11:01.986 user 0m2.029s 00:11:01.986 sys 0m0.455s 00:11:01.986 12:22:31 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:01.986 12:22:31 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.986 ************************************ 00:11:01.986 END TEST accel_rpc 00:11:01.986 ************************************ 00:11:02.245 12:22:31 -- common/autotest_common.sh@1142 -- # return 0 00:11:02.245 12:22:31 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:02.245 12:22:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:02.245 12:22:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:02.245 12:22:31 -- common/autotest_common.sh@10 -- # set +x 00:11:02.245 ************************************ 00:11:02.245 START TEST app_cmdline 00:11:02.245 ************************************ 00:11:02.245 12:22:31 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:02.245 * Looking for test storage... 
00:11:02.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:02.245 12:22:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:02.245 12:22:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=74354 00:11:02.245 12:22:31 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:02.245 12:22:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 74354 00:11:02.245 12:22:31 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 74354 ']' 00:11:02.245 12:22:31 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.245 12:22:31 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:02.245 12:22:31 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.245 12:22:31 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:02.245 12:22:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:02.245 [2024-07-12 12:22:31.260256] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:02.245 [2024-07-12 12:22:31.260374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74354 ] 00:11:02.504 [2024-07-12 12:22:31.400135] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.504 [2024-07-12 12:22:31.488913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.504 [2024-07-12 12:22:31.544075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:03.439 12:22:32 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:03.439 12:22:32 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:11:03.439 12:22:32 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:03.439 { 00:11:03.439 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:11:03.439 "fields": { 00:11:03.439 "major": 24, 00:11:03.439 "minor": 9, 00:11:03.439 "patch": 0, 00:11:03.439 "suffix": "-pre", 00:11:03.439 "commit": "719d03c6a" 00:11:03.439 } 00:11:03.439 } 00:11:03.439 12:22:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:03.439 12:22:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:03.439 12:22:32 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:03.439 12:22:32 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:03.439 12:22:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:03.439 12:22:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:03.439 12:22:32 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:03.439 12:22:32 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.439 12:22:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:03.439 12:22:32 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.439 12:22:32 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:03.439 12:22:32 app_cmdline -- app/cmdline.sh@28 -- # [[ 
rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:03.439 12:22:32 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:03.439 12:22:32 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:11:03.439 12:22:32 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:03.439 12:22:32 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.439 12:22:32 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.439 12:22:32 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.439 12:22:32 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.439 12:22:32 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.439 12:22:32 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.439 12:22:32 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.439 12:22:32 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:03.439 12:22:32 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:03.698 request: 00:11:03.698 { 00:11:03.698 "method": "env_dpdk_get_mem_stats", 00:11:03.698 "req_id": 1 00:11:03.698 } 00:11:03.698 Got JSON-RPC error response 00:11:03.698 response: 00:11:03.698 { 00:11:03.698 "code": -32601, 00:11:03.698 "message": "Method not found" 00:11:03.698 } 00:11:03.698 12:22:32 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:11:03.698 12:22:32 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:03.698 12:22:32 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:03.698 12:22:32 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:03.698 12:22:32 app_cmdline -- app/cmdline.sh@1 -- # killprocess 74354 00:11:03.698 12:22:32 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 74354 ']' 00:11:03.698 12:22:32 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 74354 00:11:03.698 12:22:32 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:11:03.698 12:22:32 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:03.698 12:22:32 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74354 00:11:03.956 12:22:32 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:03.956 killing process with pid 74354 00:11:03.956 12:22:32 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:03.956 12:22:32 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74354' 00:11:03.956 12:22:32 app_cmdline -- common/autotest_common.sh@967 -- # kill 74354 00:11:03.956 12:22:32 app_cmdline -- common/autotest_common.sh@972 -- # wait 74354 00:11:04.214 00:11:04.214 real 0m2.065s 00:11:04.214 user 0m2.556s 00:11:04.214 sys 0m0.471s 00:11:04.214 12:22:33 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:04.214 12:22:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:04.214 ************************************ 00:11:04.214 END TEST app_cmdline 00:11:04.214 
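The app_cmdline run above starts spdk_tgt with an RPC allowlist, so only the two whitelisted methods answer and any other call fails with JSON-RPC error -32601; a minimal sketch of the same exchange (binary and script paths as in this workspace):

  # start the target with only two RPCs permitted
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  # permitted: prints the version object shown above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
  # not permitted: returns "Method not found" (-32601), which the test treats as a pass
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats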
************************************ 00:11:04.214 12:22:33 -- common/autotest_common.sh@1142 -- # return 0 00:11:04.214 12:22:33 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:04.214 12:22:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:04.214 12:22:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:04.214 12:22:33 -- common/autotest_common.sh@10 -- # set +x 00:11:04.214 ************************************ 00:11:04.214 START TEST version 00:11:04.214 ************************************ 00:11:04.214 12:22:33 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:04.471 * Looking for test storage... 00:11:04.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:04.471 12:22:33 version -- app/version.sh@17 -- # get_header_version major 00:11:04.471 12:22:33 version -- app/version.sh@14 -- # cut -f2 00:11:04.471 12:22:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:04.471 12:22:33 version -- app/version.sh@14 -- # tr -d '"' 00:11:04.471 12:22:33 version -- app/version.sh@17 -- # major=24 00:11:04.471 12:22:33 version -- app/version.sh@18 -- # get_header_version minor 00:11:04.471 12:22:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:04.471 12:22:33 version -- app/version.sh@14 -- # cut -f2 00:11:04.471 12:22:33 version -- app/version.sh@14 -- # tr -d '"' 00:11:04.471 12:22:33 version -- app/version.sh@18 -- # minor=9 00:11:04.471 12:22:33 version -- app/version.sh@19 -- # get_header_version patch 00:11:04.471 12:22:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:04.471 12:22:33 version -- app/version.sh@14 -- # cut -f2 00:11:04.471 12:22:33 version -- app/version.sh@14 -- # tr -d '"' 00:11:04.471 12:22:33 version -- app/version.sh@19 -- # patch=0 00:11:04.471 12:22:33 version -- app/version.sh@20 -- # get_header_version suffix 00:11:04.471 12:22:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:04.471 12:22:33 version -- app/version.sh@14 -- # cut -f2 00:11:04.471 12:22:33 version -- app/version.sh@14 -- # tr -d '"' 00:11:04.471 12:22:33 version -- app/version.sh@20 -- # suffix=-pre 00:11:04.471 12:22:33 version -- app/version.sh@22 -- # version=24.9 00:11:04.471 12:22:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:04.471 12:22:33 version -- app/version.sh@28 -- # version=24.9rc0 00:11:04.471 12:22:33 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:04.471 12:22:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:04.471 12:22:33 version -- app/version.sh@30 -- # py_version=24.9rc0 00:11:04.471 12:22:33 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:11:04.471 00:11:04.471 real 0m0.163s 00:11:04.471 user 0m0.096s 00:11:04.471 sys 0m0.097s 00:11:04.471 12:22:33 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:04.471 12:22:33 version -- common/autotest_common.sh@10 -- # set +x 
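The version test above derives each component by grepping include/spdk/version.h and checks the result against the Python package string; a minimal sketch of that parsing, using the same grep/cut/tr pipeline recorded above and assuming it is run from the repository root:

  # pull the '#define SPDK_VERSION_MAJOR 24' style lines out of the public header
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
  # version.sh maps the -pre suffix to rc0, giving 24.9rc0, and compares it with spdk.__version__
  echo "${major}.${minor}${suffix}"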
00:11:04.471 ************************************ 00:11:04.471 END TEST version 00:11:04.471 ************************************ 00:11:04.471 12:22:33 -- common/autotest_common.sh@1142 -- # return 0 00:11:04.471 12:22:33 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:11:04.471 12:22:33 -- spdk/autotest.sh@198 -- # uname -s 00:11:04.471 12:22:33 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:11:04.471 12:22:33 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:11:04.471 12:22:33 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:11:04.471 12:22:33 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:11:04.471 12:22:33 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:11:04.471 12:22:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:04.471 12:22:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:04.471 12:22:33 -- common/autotest_common.sh@10 -- # set +x 00:11:04.471 ************************************ 00:11:04.471 START TEST spdk_dd 00:11:04.471 ************************************ 00:11:04.471 12:22:33 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:11:04.471 * Looking for test storage... 00:11:04.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:04.471 12:22:33 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:04.471 12:22:33 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.471 12:22:33 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.471 12:22:33 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.471 12:22:33 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.471 12:22:33 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.471 12:22:33 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.471 12:22:33 spdk_dd -- paths/export.sh@5 -- # export PATH 00:11:04.471 12:22:33 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.471 12:22:33 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:05.039 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:05.039 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:05.039 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:05.039 12:22:33 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:11:05.039 12:22:33 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@230 -- # local class 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@232 -- # local progif 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@233 -- # class=01 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@15 -- # local i 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@24 -- # return 0 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@15 -- # local i 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:05.039 12:22:33 
spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@24 -- # return 0 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:11:05.039 12:22:33 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:11:05.039 12:22:33 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@139 -- # local lib so 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 
00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:11:05.039 12:22:33 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:11:05.039 12:22:34 
spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:11:05.039 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.1 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:11:05.040 
12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.23 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 
spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:11:05.040 * spdk_dd linked to liburing 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:05.040 12:22:34 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 
00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:05.040 12:22:34 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:05.041 12:22:34 
spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:05.041 12:22:34 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:11:05.041 12:22:34 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:11:05.041 12:22:34 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:11:05.041 12:22:34 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:11:05.041 12:22:34 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:11:05.041 12:22:34 spdk_dd -- dd/common.sh@157 -- # return 0 00:11:05.041 12:22:34 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:11:05.041 12:22:34 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:11:05.041 12:22:34 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:05.041 12:22:34 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.041 12:22:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:05.041 ************************************ 00:11:05.041 START TEST spdk_dd_basic_rw 00:11:05.041 ************************************ 00:11:05.041 12:22:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:11:05.041 * Looking for test storage... 
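The liburing check traced above walks spdk_dd's dynamic dependencies and reports liburing usage when any of them matches liburing.so.*, then cross-checks the build configuration before exporting liburing_in_use=1. A minimal bash sketch of that logic, assuming ldd output as the dependency source (the exact feed used by dd/common.sh is not visible in this trace):

liburing_in_use=0
while read -r lib _ so _; do        # each line looks like "liburing.so.2 => /usr/lib64/liburing.so.2 (0x...)"
  if [[ $lib == liburing.so.* ]]; then
    printf '* spdk_dd linked to liburing\n'
    liburing_in_use=1               # the traced script additionally verifies CONFIG_URING from
  fi                                # build_config.sh and that /usr/lib64/liburing.so.2 exists
done < <(ldd /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)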
00:11:05.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:11:05.302 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:11:05.303 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:11:05.303 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:11:05.303 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:11:05.303 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:11:05.303 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:11:05.303 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:11:05.303 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:11:05.303 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:11:05.303 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:11:05.303 12:22:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:05.303 12:22:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:11:05.303 12:22:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.303 12:22:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:11:05.303 12:22:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:11:05.303 ************************************ 00:11:05.303 START TEST dd_bs_lt_native_bs 00:11:05.303 ************************************ 00:11:05.303 12:22:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:11:05.303 12:22:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:11:05.303 12:22:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:11:05.303 12:22:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:05.304 12:22:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:05.304 12:22:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:05.304 12:22:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:05.304 12:22:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:05.304 12:22:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:05.304 12:22:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:05.304 12:22:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:05.304 12:22:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:11:05.304 { 00:11:05.304 "subsystems": [ 00:11:05.304 { 00:11:05.304 "subsystem": "bdev", 00:11:05.304 "config": [ 00:11:05.304 { 00:11:05.304 "params": { 00:11:05.304 "trtype": "pcie", 00:11:05.304 "traddr": "0000:00:10.0", 00:11:05.304 "name": "Nvme0" 00:11:05.304 }, 00:11:05.304 "method": "bdev_nvme_attach_controller" 00:11:05.304 }, 00:11:05.304 { 00:11:05.304 "method": "bdev_wait_for_examine" 00:11:05.304 } 00:11:05.304 ] 00:11:05.304 } 00:11:05.304 ] 00:11:05.304 } 00:11:05.304 [2024-07-12 12:22:34.373419] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
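For the dd_bs_lt_native_bs case, the JSON bdev configuration printed above is handed to spdk_dd on an extra file descriptor (--json /dev/fd/61) while the payload arrives on /dev/fd/62, and the run is expected to fail because --bs=2048 is below the 4096-byte native block size detected earlier. A hypothetical standalone reproduction of that handoff, with head -c /dev/urandom as a stand-in for the generated test input actually used by the suite:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 \
    61<<'JSON' 62< <(head -c 61440 /dev/urandom)
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
    "method": "bdev_nvme_attach_controller" },
  { "method": "bdev_wait_for_examine" } ] } ] }
JSON
# exits non-zero: --bs (2048) is smaller than the output device's 4096-byte native block size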
00:11:05.304 [2024-07-12 12:22:34.373536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74674 ] 00:11:05.563 [2024-07-12 12:22:34.517024] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.563 [2024-07-12 12:22:34.621895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.821 [2024-07-12 12:22:34.681180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:05.821 [2024-07-12 12:22:34.788753] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:11:05.821 [2024-07-12 12:22:34.788851] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:06.079 [2024-07-12 12:22:34.911194] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:06.079 12:22:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:11:06.079 12:22:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:06.079 12:22:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:11:06.079 12:22:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:11:06.079 12:22:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:11:06.079 12:22:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:06.079 00:11:06.079 real 0m0.668s 00:11:06.079 user 0m0.458s 00:11:06.079 sys 0m0.169s 00:11:06.079 12:22:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:06.079 ************************************ 00:11:06.079 END TEST dd_bs_lt_native_bs 00:11:06.079 ************************************ 00:11:06.079 12:22:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:11:06.079 12:22:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:11:06.079 12:22:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:11:06.079 12:22:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:06.079 12:22:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.079 12:22:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:11:06.079 ************************************ 00:11:06.079 START TEST dd_rw 00:11:06.079 ************************************ 00:11:06.079 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:11:06.079 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:11:06.079 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:11:06.079 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:11:06.079 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:11:06.079 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:11:06.079 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:11:06.080 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:11:06.080 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:11:06.080 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:11:06.080 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:11:06.080 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:11:06.080 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:06.080 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:11:06.080 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:11:06.080 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:11:06.080 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:11:06.080 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:11:06.080 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:06.645 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:11:06.645 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:11:06.645 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:06.645 12:22:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:06.645 [2024-07-12 12:22:35.659530] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:06.645 [2024-07-12 12:22:35.659636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74711 ] 00:11:06.645 { 00:11:06.645 "subsystems": [ 00:11:06.645 { 00:11:06.645 "subsystem": "bdev", 00:11:06.645 "config": [ 00:11:06.645 { 00:11:06.645 "params": { 00:11:06.645 "trtype": "pcie", 00:11:06.645 "traddr": "0000:00:10.0", 00:11:06.645 "name": "Nvme0" 00:11:06.645 }, 00:11:06.645 "method": "bdev_nvme_attach_controller" 00:11:06.645 }, 00:11:06.645 { 00:11:06.645 "method": "bdev_wait_for_examine" 00:11:06.645 } 00:11:06.645 ] 00:11:06.645 } 00:11:06.645 ] 00:11:06.645 } 00:11:06.904 [2024-07-12 12:22:35.797881] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.904 [2024-07-12 12:22:35.888633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.904 [2024-07-12 12:22:35.942727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:07.163  Copying: 60/60 [kB] (average 29 MBps) 00:11:07.163 00:11:07.163 12:22:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:11:07.163 12:22:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:11:07.163 12:22:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:07.163 12:22:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:07.422 [2024-07-12 12:22:36.289204] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:07.422 [2024-07-12 12:22:36.289331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74724 ] 00:11:07.422 { 00:11:07.422 "subsystems": [ 00:11:07.422 { 00:11:07.422 "subsystem": "bdev", 00:11:07.422 "config": [ 00:11:07.422 { 00:11:07.422 "params": { 00:11:07.422 "trtype": "pcie", 00:11:07.422 "traddr": "0000:00:10.0", 00:11:07.422 "name": "Nvme0" 00:11:07.422 }, 00:11:07.422 "method": "bdev_nvme_attach_controller" 00:11:07.422 }, 00:11:07.422 { 00:11:07.422 "method": "bdev_wait_for_examine" 00:11:07.422 } 00:11:07.422 ] 00:11:07.422 } 00:11:07.422 ] 00:11:07.422 } 00:11:07.422 [2024-07-12 12:22:36.428313] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.680 [2024-07-12 12:22:36.511071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.680 [2024-07-12 12:22:36.566719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:07.937  Copying: 60/60 [kB] (average 19 MBps) 00:11:07.937 00:11:07.937 12:22:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:07.937 12:22:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:11:07.937 12:22:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:07.937 12:22:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:11:07.937 12:22:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:11:07.937 12:22:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:11:07.937 12:22:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:11:07.937 12:22:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:07.937 12:22:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:11:07.937 12:22:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:07.937 12:22:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:07.938 [2024-07-12 12:22:36.948299] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:07.938 [2024-07-12 12:22:36.948435] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74740 ] 00:11:07.938 { 00:11:07.938 "subsystems": [ 00:11:07.938 { 00:11:07.938 "subsystem": "bdev", 00:11:07.938 "config": [ 00:11:07.938 { 00:11:07.938 "params": { 00:11:07.938 "trtype": "pcie", 00:11:07.938 "traddr": "0000:00:10.0", 00:11:07.938 "name": "Nvme0" 00:11:07.938 }, 00:11:07.938 "method": "bdev_nvme_attach_controller" 00:11:07.938 }, 00:11:07.938 { 00:11:07.938 "method": "bdev_wait_for_examine" 00:11:07.938 } 00:11:07.938 ] 00:11:07.938 } 00:11:07.938 ] 00:11:07.938 } 00:11:08.195 [2024-07-12 12:22:37.088736] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.195 [2024-07-12 12:22:37.183336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.195 [2024-07-12 12:22:37.242561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:08.710  Copying: 1024/1024 [kB] (average 500 MBps) 00:11:08.710 00:11:08.710 12:22:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:08.710 12:22:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:11:08.710 12:22:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:11:08.710 12:22:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:11:08.710 12:22:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:11:08.710 12:22:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:11:08.710 12:22:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:09.276 12:22:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:11:09.276 12:22:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:11:09.276 12:22:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:09.276 12:22:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:09.276 [2024-07-12 12:22:38.165020] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
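Each basic_rw round traced above follows the same write/read/verify/cleanup pattern; the bs=4096, qd=1 round has just completed and the qd=64 round starts next. Paraphrased from the traced commands (spdk_dd stands for the full build/bin path, and the bdev JSON config is again supplied on /dev/fd/62):

dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
spdk_dd --if="$dump0" --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62             # write the 61440-byte test file
spdk_dd --ib=Nvme0n1 --of="$dump1" --bs=4096 --qd=1 --count=15 --json /dev/fd/62  # read the same 15 blocks back
diff -q "$dump0" "$dump1"                                                          # verify the round trip
spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62       # clear_nvme: overwrite 1 MiB with zeroes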
00:11:09.276 [2024-07-12 12:22:38.165145] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74759 ] 00:11:09.276 { 00:11:09.276 "subsystems": [ 00:11:09.276 { 00:11:09.276 "subsystem": "bdev", 00:11:09.276 "config": [ 00:11:09.276 { 00:11:09.276 "params": { 00:11:09.276 "trtype": "pcie", 00:11:09.276 "traddr": "0000:00:10.0", 00:11:09.276 "name": "Nvme0" 00:11:09.276 }, 00:11:09.276 "method": "bdev_nvme_attach_controller" 00:11:09.276 }, 00:11:09.276 { 00:11:09.276 "method": "bdev_wait_for_examine" 00:11:09.276 } 00:11:09.276 ] 00:11:09.276 } 00:11:09.276 ] 00:11:09.276 } 00:11:09.276 [2024-07-12 12:22:38.299467] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.534 [2024-07-12 12:22:38.388198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.534 [2024-07-12 12:22:38.442434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:09.791  Copying: 60/60 [kB] (average 58 MBps) 00:11:09.791 00:11:09.791 12:22:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:11:09.791 12:22:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:11:09.791 12:22:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:09.791 12:22:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:09.791 [2024-07-12 12:22:38.780954] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:09.791 [2024-07-12 12:22:38.781046] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74778 ] 00:11:09.791 { 00:11:09.791 "subsystems": [ 00:11:09.791 { 00:11:09.791 "subsystem": "bdev", 00:11:09.791 "config": [ 00:11:09.791 { 00:11:09.791 "params": { 00:11:09.791 "trtype": "pcie", 00:11:09.791 "traddr": "0000:00:10.0", 00:11:09.791 "name": "Nvme0" 00:11:09.791 }, 00:11:09.791 "method": "bdev_nvme_attach_controller" 00:11:09.791 }, 00:11:09.791 { 00:11:09.791 "method": "bdev_wait_for_examine" 00:11:09.791 } 00:11:09.791 ] 00:11:09.791 } 00:11:09.791 ] 00:11:09.791 } 00:11:10.048 [2024-07-12 12:22:38.917036] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.048 [2024-07-12 12:22:38.995516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.048 [2024-07-12 12:22:39.048403] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:10.305  Copying: 60/60 [kB] (average 58 MBps) 00:11:10.305 00:11:10.305 12:22:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:10.305 12:22:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:11:10.306 12:22:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:10.306 12:22:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:11:10.306 12:22:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:11:10.306 12:22:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:11:10.306 12:22:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:11:10.306 12:22:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:10.306 12:22:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:11:10.306 12:22:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:10.306 12:22:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:10.306 [2024-07-12 12:22:39.388147] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:10.306 [2024-07-12 12:22:39.388228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74793 ] 00:11:10.564 { 00:11:10.564 "subsystems": [ 00:11:10.564 { 00:11:10.564 "subsystem": "bdev", 00:11:10.564 "config": [ 00:11:10.564 { 00:11:10.564 "params": { 00:11:10.564 "trtype": "pcie", 00:11:10.564 "traddr": "0000:00:10.0", 00:11:10.564 "name": "Nvme0" 00:11:10.564 }, 00:11:10.564 "method": "bdev_nvme_attach_controller" 00:11:10.564 }, 00:11:10.564 { 00:11:10.564 "method": "bdev_wait_for_examine" 00:11:10.564 } 00:11:10.564 ] 00:11:10.564 } 00:11:10.564 ] 00:11:10.564 } 00:11:10.564 [2024-07-12 12:22:39.520429] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.564 [2024-07-12 12:22:39.597254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.822 [2024-07-12 12:22:39.654237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:11.080  Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:11.080 00:11:11.080 12:22:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:11:11.080 12:22:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:11.080 12:22:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:11:11.080 12:22:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:11:11.080 12:22:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:11:11.080 12:22:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:11:11.080 12:22:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:11:11.080 12:22:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:11.647 12:22:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:11:11.647 12:22:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:11:11.647 12:22:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:11.647 12:22:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:11.647 [2024-07-12 12:22:40.542798] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:11.647 [2024-07-12 12:22:40.542921] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74812 ] 00:11:11.647 { 00:11:11.647 "subsystems": [ 00:11:11.647 { 00:11:11.647 "subsystem": "bdev", 00:11:11.647 "config": [ 00:11:11.647 { 00:11:11.647 "params": { 00:11:11.647 "trtype": "pcie", 00:11:11.647 "traddr": "0000:00:10.0", 00:11:11.647 "name": "Nvme0" 00:11:11.647 }, 00:11:11.647 "method": "bdev_nvme_attach_controller" 00:11:11.647 }, 00:11:11.647 { 00:11:11.647 "method": "bdev_wait_for_examine" 00:11:11.647 } 00:11:11.647 ] 00:11:11.647 } 00:11:11.647 ] 00:11:11.647 } 00:11:11.647 [2024-07-12 12:22:40.676962] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.906 [2024-07-12 12:22:40.776938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.906 [2024-07-12 12:22:40.833249] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:12.165  Copying: 56/56 [kB] (average 54 MBps) 00:11:12.165 00:11:12.166 12:22:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:11:12.166 12:22:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:11:12.166 12:22:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:12.166 12:22:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:12.166 { 00:11:12.166 "subsystems": [ 00:11:12.166 { 00:11:12.166 "subsystem": "bdev", 00:11:12.166 "config": [ 00:11:12.166 { 00:11:12.166 "params": { 00:11:12.166 "trtype": "pcie", 00:11:12.166 "traddr": "0000:00:10.0", 00:11:12.166 "name": "Nvme0" 00:11:12.166 }, 00:11:12.166 "method": "bdev_nvme_attach_controller" 00:11:12.166 }, 00:11:12.166 { 00:11:12.166 "method": "bdev_wait_for_examine" 00:11:12.166 } 00:11:12.166 ] 00:11:12.166 } 00:11:12.166 ] 00:11:12.166 } 00:11:12.166 [2024-07-12 12:22:41.202504] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:12.166 [2024-07-12 12:22:41.202619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74826 ] 00:11:12.425 [2024-07-12 12:22:41.340440] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.425 [2024-07-12 12:22:41.438700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.425 [2024-07-12 12:22:41.493761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:12.943  Copying: 56/56 [kB] (average 27 MBps) 00:11:12.943 00:11:12.943 12:22:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:12.943 12:22:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:11:12.943 12:22:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:12.943 12:22:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:11:12.943 12:22:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:11:12.943 12:22:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:11:12.943 12:22:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:11:12.943 12:22:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:12.943 12:22:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:11:12.943 12:22:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:12.943 12:22:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:12.943 [2024-07-12 12:22:41.861649] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:12.943 [2024-07-12 12:22:41.861777] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74847 ] 00:11:12.943 { 00:11:12.943 "subsystems": [ 00:11:12.943 { 00:11:12.943 "subsystem": "bdev", 00:11:12.943 "config": [ 00:11:12.943 { 00:11:12.943 "params": { 00:11:12.943 "trtype": "pcie", 00:11:12.943 "traddr": "0000:00:10.0", 00:11:12.943 "name": "Nvme0" 00:11:12.943 }, 00:11:12.943 "method": "bdev_nvme_attach_controller" 00:11:12.943 }, 00:11:12.943 { 00:11:12.943 "method": "bdev_wait_for_examine" 00:11:12.943 } 00:11:12.943 ] 00:11:12.943 } 00:11:12.943 ] 00:11:12.943 } 00:11:12.943 [2024-07-12 12:22:42.001363] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.202 [2024-07-12 12:22:42.101375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.202 [2024-07-12 12:22:42.157934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:13.461  Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:13.461 00:11:13.461 12:22:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:13.461 12:22:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:11:13.461 12:22:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:11:13.461 12:22:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:11:13.461 12:22:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:11:13.461 12:22:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:11:13.461 12:22:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:14.029 12:22:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:11:14.029 12:22:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:11:14.029 12:22:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:14.029 12:22:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:14.029 [2024-07-12 12:22:43.035689] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:14.029 [2024-07-12 12:22:43.035815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74866 ] 00:11:14.029 { 00:11:14.029 "subsystems": [ 00:11:14.029 { 00:11:14.029 "subsystem": "bdev", 00:11:14.029 "config": [ 00:11:14.029 { 00:11:14.029 "params": { 00:11:14.029 "trtype": "pcie", 00:11:14.029 "traddr": "0000:00:10.0", 00:11:14.029 "name": "Nvme0" 00:11:14.029 }, 00:11:14.029 "method": "bdev_nvme_attach_controller" 00:11:14.029 }, 00:11:14.029 { 00:11:14.029 "method": "bdev_wait_for_examine" 00:11:14.029 } 00:11:14.029 ] 00:11:14.029 } 00:11:14.029 ] 00:11:14.029 } 00:11:14.287 [2024-07-12 12:22:43.175959] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.287 [2024-07-12 12:22:43.248553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.287 [2024-07-12 12:22:43.303092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:14.582  Copying: 56/56 [kB] (average 54 MBps) 00:11:14.582 00:11:14.582 12:22:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:11:14.582 12:22:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:11:14.582 12:22:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:14.582 12:22:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:14.841 [2024-07-12 12:22:43.669035] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:14.841 [2024-07-12 12:22:43.669176] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74879 ] 00:11:14.841 { 00:11:14.841 "subsystems": [ 00:11:14.841 { 00:11:14.841 "subsystem": "bdev", 00:11:14.841 "config": [ 00:11:14.841 { 00:11:14.841 "params": { 00:11:14.841 "trtype": "pcie", 00:11:14.841 "traddr": "0000:00:10.0", 00:11:14.841 "name": "Nvme0" 00:11:14.841 }, 00:11:14.841 "method": "bdev_nvme_attach_controller" 00:11:14.841 }, 00:11:14.841 { 00:11:14.841 "method": "bdev_wait_for_examine" 00:11:14.841 } 00:11:14.841 ] 00:11:14.841 } 00:11:14.841 ] 00:11:14.841 } 00:11:14.841 [2024-07-12 12:22:43.809677] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.841 [2024-07-12 12:22:43.890296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.099 [2024-07-12 12:22:43.944413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:15.358  Copying: 56/56 [kB] (average 54 MBps) 00:11:15.358 00:11:15.358 12:22:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:15.358 12:22:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:11:15.358 12:22:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:15.358 12:22:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:11:15.358 12:22:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:11:15.358 12:22:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:11:15.358 12:22:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:11:15.358 12:22:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:15.358 12:22:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:11:15.358 12:22:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:15.358 12:22:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:15.358 [2024-07-12 12:22:44.310747] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:15.358 [2024-07-12 12:22:44.310899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74895 ] 00:11:15.358 { 00:11:15.358 "subsystems": [ 00:11:15.358 { 00:11:15.358 "subsystem": "bdev", 00:11:15.358 "config": [ 00:11:15.358 { 00:11:15.358 "params": { 00:11:15.358 "trtype": "pcie", 00:11:15.358 "traddr": "0000:00:10.0", 00:11:15.358 "name": "Nvme0" 00:11:15.358 }, 00:11:15.358 "method": "bdev_nvme_attach_controller" 00:11:15.358 }, 00:11:15.358 { 00:11:15.358 "method": "bdev_wait_for_examine" 00:11:15.358 } 00:11:15.358 ] 00:11:15.358 } 00:11:15.358 ] 00:11:15.358 } 00:11:15.617 [2024-07-12 12:22:44.447327] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.617 [2024-07-12 12:22:44.537485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.617 [2024-07-12 12:22:44.598591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:15.875  Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:15.875 00:11:15.875 12:22:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:11:15.875 12:22:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:15.875 12:22:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:11:15.875 12:22:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:11:15.875 12:22:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:11:15.875 12:22:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:11:15.875 12:22:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:11:15.875 12:22:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:16.442 12:22:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:11:16.442 12:22:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:11:16.442 12:22:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:16.442 12:22:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:16.442 [2024-07-12 12:22:45.384305] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:16.442 [2024-07-12 12:22:45.384434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74914 ] 00:11:16.442 { 00:11:16.442 "subsystems": [ 00:11:16.442 { 00:11:16.442 "subsystem": "bdev", 00:11:16.442 "config": [ 00:11:16.442 { 00:11:16.442 "params": { 00:11:16.442 "trtype": "pcie", 00:11:16.442 "traddr": "0000:00:10.0", 00:11:16.442 "name": "Nvme0" 00:11:16.442 }, 00:11:16.442 "method": "bdev_nvme_attach_controller" 00:11:16.442 }, 00:11:16.442 { 00:11:16.442 "method": "bdev_wait_for_examine" 00:11:16.442 } 00:11:16.442 ] 00:11:16.442 } 00:11:16.442 ] 00:11:16.442 } 00:11:16.442 [2024-07-12 12:22:45.523529] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.701 [2024-07-12 12:22:45.617213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.701 [2024-07-12 12:22:45.673596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:16.959  Copying: 48/48 [kB] (average 46 MBps) 00:11:16.959 00:11:16.959 12:22:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:11:16.959 12:22:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:11:16.959 12:22:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:16.959 12:22:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:17.218 [2024-07-12 12:22:46.052297] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:17.219 [2024-07-12 12:22:46.052424] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74933 ] 00:11:17.219 { 00:11:17.219 "subsystems": [ 00:11:17.219 { 00:11:17.219 "subsystem": "bdev", 00:11:17.219 "config": [ 00:11:17.219 { 00:11:17.219 "params": { 00:11:17.219 "trtype": "pcie", 00:11:17.219 "traddr": "0000:00:10.0", 00:11:17.219 "name": "Nvme0" 00:11:17.219 }, 00:11:17.219 "method": "bdev_nvme_attach_controller" 00:11:17.219 }, 00:11:17.219 { 00:11:17.219 "method": "bdev_wait_for_examine" 00:11:17.219 } 00:11:17.219 ] 00:11:17.219 } 00:11:17.219 ] 00:11:17.219 } 00:11:17.219 [2024-07-12 12:22:46.191034] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.219 [2024-07-12 12:22:46.264555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.477 [2024-07-12 12:22:46.321992] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:17.735  Copying: 48/48 [kB] (average 46 MBps) 00:11:17.735 00:11:17.736 12:22:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:17.736 12:22:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:11:17.736 12:22:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:17.736 12:22:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:11:17.736 12:22:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:11:17.736 12:22:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:11:17.736 12:22:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:11:17.736 12:22:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:17.736 12:22:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:11:17.736 12:22:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:17.736 12:22:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:17.736 { 00:11:17.736 "subsystems": [ 00:11:17.736 { 00:11:17.736 "subsystem": "bdev", 00:11:17.736 "config": [ 00:11:17.736 { 00:11:17.736 "params": { 00:11:17.736 "trtype": "pcie", 00:11:17.736 "traddr": "0000:00:10.0", 00:11:17.736 "name": "Nvme0" 00:11:17.736 }, 00:11:17.736 "method": "bdev_nvme_attach_controller" 00:11:17.736 }, 00:11:17.736 { 00:11:17.736 "method": "bdev_wait_for_examine" 00:11:17.736 } 00:11:17.736 ] 00:11:17.736 } 00:11:17.736 ] 00:11:17.736 } 00:11:17.736 [2024-07-12 12:22:46.693464] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:17.736 [2024-07-12 12:22:46.693544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74943 ] 00:11:17.994 [2024-07-12 12:22:46.826785] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.994 [2024-07-12 12:22:46.897246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.994 [2024-07-12 12:22:46.951458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:18.252  Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:18.252 00:11:18.252 12:22:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:18.252 12:22:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:11:18.252 12:22:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:11:18.252 12:22:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:11:18.252 12:22:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:11:18.252 12:22:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:11:18.252 12:22:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:18.818 12:22:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:11:18.818 12:22:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:11:18.818 12:22:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:18.818 12:22:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:18.818 [2024-07-12 12:22:47.716328] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:18.818 [2024-07-12 12:22:47.716936] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74962 ] 00:11:18.818 { 00:11:18.818 "subsystems": [ 00:11:18.818 { 00:11:18.818 "subsystem": "bdev", 00:11:18.818 "config": [ 00:11:18.818 { 00:11:18.818 "params": { 00:11:18.818 "trtype": "pcie", 00:11:18.818 "traddr": "0000:00:10.0", 00:11:18.818 "name": "Nvme0" 00:11:18.818 }, 00:11:18.818 "method": "bdev_nvme_attach_controller" 00:11:18.818 }, 00:11:18.818 { 00:11:18.818 "method": "bdev_wait_for_examine" 00:11:18.818 } 00:11:18.818 ] 00:11:18.818 } 00:11:18.818 ] 00:11:18.818 } 00:11:18.818 [2024-07-12 12:22:47.853333] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.077 [2024-07-12 12:22:47.929297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.077 [2024-07-12 12:22:47.985018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:19.335  Copying: 48/48 [kB] (average 46 MBps) 00:11:19.335 00:11:19.335 12:22:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:11:19.335 12:22:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:11:19.335 12:22:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:19.335 12:22:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:19.335 [2024-07-12 12:22:48.349128] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:19.335 [2024-07-12 12:22:48.349227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74981 ] 00:11:19.335 { 00:11:19.335 "subsystems": [ 00:11:19.335 { 00:11:19.335 "subsystem": "bdev", 00:11:19.335 "config": [ 00:11:19.335 { 00:11:19.335 "params": { 00:11:19.335 "trtype": "pcie", 00:11:19.335 "traddr": "0000:00:10.0", 00:11:19.335 "name": "Nvme0" 00:11:19.335 }, 00:11:19.335 "method": "bdev_nvme_attach_controller" 00:11:19.335 }, 00:11:19.335 { 00:11:19.335 "method": "bdev_wait_for_examine" 00:11:19.335 } 00:11:19.335 ] 00:11:19.335 } 00:11:19.335 ] 00:11:19.335 } 00:11:19.593 [2024-07-12 12:22:48.488796] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.593 [2024-07-12 12:22:48.565229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.593 [2024-07-12 12:22:48.619180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:19.851  Copying: 48/48 [kB] (average 46 MBps) 00:11:19.851 00:11:19.851 12:22:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:20.111 12:22:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:11:20.111 12:22:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:20.111 12:22:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:11:20.111 12:22:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:11:20.111 12:22:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:11:20.111 12:22:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:11:20.111 12:22:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:20.111 12:22:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:11:20.111 12:22:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:20.111 12:22:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:20.111 [2024-07-12 12:22:48.992464] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:20.111 [2024-07-12 12:22:48.992572] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75002 ] 00:11:20.111 { 00:11:20.111 "subsystems": [ 00:11:20.111 { 00:11:20.111 "subsystem": "bdev", 00:11:20.111 "config": [ 00:11:20.111 { 00:11:20.111 "params": { 00:11:20.111 "trtype": "pcie", 00:11:20.111 "traddr": "0000:00:10.0", 00:11:20.111 "name": "Nvme0" 00:11:20.111 }, 00:11:20.111 "method": "bdev_nvme_attach_controller" 00:11:20.111 }, 00:11:20.111 { 00:11:20.111 "method": "bdev_wait_for_examine" 00:11:20.111 } 00:11:20.111 ] 00:11:20.111 } 00:11:20.111 ] 00:11:20.111 } 00:11:20.111 [2024-07-12 12:22:49.132580] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.370 [2024-07-12 12:22:49.227892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.370 [2024-07-12 12:22:49.284406] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:20.637  Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:20.637 00:11:20.637 00:11:20.637 real 0m14.553s 00:11:20.637 user 0m10.543s 00:11:20.637 sys 0m5.530s 00:11:20.637 12:22:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:20.637 ************************************ 00:11:20.637 END TEST dd_rw 00:11:20.637 ************************************ 00:11:20.637 12:22:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:20.637 12:22:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:11:20.637 12:22:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:11:20.637 12:22:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:20.637 12:22:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:20.637 12:22:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:11:20.637 ************************************ 00:11:20.637 START TEST dd_rw_offset 00:11:20.637 ************************************ 00:11:20.637 12:22:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:11:20.637 12:22:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:11:20.637 12:22:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:11:20.637 12:22:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:11:20.637 12:22:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:11:20.637 12:22:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:11:20.637 12:22:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=zsva1dzudaeqpamb5jyjsromozridm88duatbidmj79t9m75o2g3eaiw1pgw3gqotxe5nw4oyk77jh404n8xdstxz5e15ticxm8peigntz6lnmsuw3dwdnoh88bxcclfx4psyoclwamfea7at2lu1sye0f32o1192wbiorcckdaq3lzqm0xdt2n58zapx1nlwa321218wvc7andjxv3lcncoihzwgxft5pondbbdu2cbweaqzirgexebf0p20w5ri2pkyh9v8p7w25q8ixu2vtximi21gdz2h095jukky2cxsba36rseg7gxshmd0tdsoqewnrhv1knutg1zu94uxnrpe3wr3vucjg87x1ir6lg5bmwj5ura58q86ks2k0d72su7gqzmhgbs6b074v6eygcmgqq5dwx0nf5eta3h2xcj911p6lnmmhib4wg14acgsfh8pvvxvkxpwzdzn3puheqhn5qd57sgxv1hkasdi5xgw9g1mafirbbw0sm08js0tpqe8aa3cqzwlnwzly8f0esxv48udmeq829a0h3xqgclpi7so5rpay6zoqei3g1tpwsvxywge5cybvkee87xh1d6q1107dychr7gq35zr2tfpqzaspij4y9avisonzl4cgntsh6noy8x39uvfnr0lps4zo57wcqjdb7uie5y9o38spxwtcj0dhnog7znt27hw776luld17a8318hwnxcisjg2a784skqkaemacxlkivo61e160fz5fqzu5fskatblkf6xwlilq7nfml35nzf3fddxlxurhl4aqx0jx1ya0cj3rwaoumdg7ghtwctl1kdxocgsrb7gybkdr60wxm9mhq015xnr3x51lfoncywlqc39ok8ha1ygo1t685c3b2elhk6lzekzl4lyyg3vxmbmsd1kedxiektr4ziy0ni0zj9h4gyrl75zq45ddhopyzz16q50oh6zj8k0izb35f544w9yag7s5ex4xd84pyx46ughyclphf2k8s2os7wozmcy0ken857kk8k0fugxgvpujipetsru05h7rx2zweihlkrbwmejfl156xu52t9qo0hz7s53amf10irkqj8avtcjkpfyl9qo0p90d17823qdambximhxt8i1pwzga7j0pa8ujg24r34ojuy56c1074rnry9wqudobr3tjdedun5yscb2xnd3xoz4gti70rf3v4mipz744qmeddgziwyii9huq52p6y3ul1tqaf7kbhuen2aanf0yu15g2lgh4zfy94ebpw1rony9yupct7780jyphiatto444wr1fasm7uqghv0ti4glhjbtkd2uj6ffxygo6lb1f01dqqgrqoir6w3o2qsydjx07w0zr4sosoxkh2aeaxjwintdlt7v1xclqcxuncf0emmq7daquns8ekau2q0nsw86jybsorejhxlmjdhxws0e5cygzhnon7bb8ceg7grqvxzgmf3f2cgvf18451lja6adqaatfa2vviwpotgay1d5u1sqwoxu6dhrd20c8lltuyk18jze3rqdafslbqlxoqv8b6tubamufgmckfalwpafglsohxxbbzfbm5nh5i3qe1z25sc3tb654qx4kpo93u10udo0w44lim7l497yoi3yxr9aa98gkhsn7bhuldg4xdlg2ilm0cz2gv3o6s2nq4b1ln0ivjqjyhscj60clb60nwrww2u4v278nxs9ybkfybkgzbdkqcxg8rvu771d7rzrofi2uhjsdsn2c8sm4rrv2rwtbcfi2g0sc2tqgj4b6n3hniaw8eoaxplifixw8o1gg8rxb6bmx658kp1xa66n38n7q4waw8sumdiai9kbb2lqqnscdy98d59ebd85bkhmfvbp56aubvnzbdx02rl292q1spquheo07tkrhw53htu8ax197wpzb2w5wa4iemuihgkbcznvxneyb4sext5g63xk41kceiag00v7e0p9p2er4kb6g20cot2r97jwxmywg9tfl3il55vjqtwrzf6fcv1co44hjhd3htryfd4pi1jh6biqfgl1b0ip6fro948a90woi22o9nxciwsb7ge8cvk9g09rgk7wtvdlzn2o23zozyr8xoea5onefhvvj6a8imopmsr1t08t21oty3wkfy7sl2j1mkxf8s7bowlvnlbjy5pcw2x18y5zjyqqziq0jini1hcy01u16h9uasxde2n57hu8ivyvxicc8jysuzjpowjz7upvdt6ot0ve1n6ppegotvvcunydhte60laf7ket2p6yoc5dj7h7dirwf7kc21wng0ywv047q643pq90dfax84r84fe0sgj84g550sjnu1j6d7051piljq9nt9nif49r59fdqkqeox5daci11w7333cdslb0e5ja0hmn8zalubeygylqcmj436b92mzue1rgbrairhkvkhm2qfz9d5turzt5zm2i5bi16676xdo9u3dwi70m2dprodkr4hys9pnpu45ezqqpg5noimtcqarb5c24er0px3rjiaflljqw0nhhyi1y14gcf3x0echrkiv2vofiissx63vjehtx3m33udgnlfwatvcauislunp38e0f0ag22z93uh2basem6b5c9qkbxo34hssulwhhgcmvdt6n4941r9jjgoh9eho0lnt7x4yb0m598ajhu3fv4o6kh3avknnr7z7u2lau5hya0ey8olqsi0n9uu6fgn6efyfeihsa289cxpcownallbpovvp0wqj7ldc6dbo8qsgh2jk19c46plod6ntxdhegfthogw1zecs1i8mg0fpsafyq4d2knu5wo79gasz0l2zrnu0gqlgyfczhairgjojiphalr8iqlvomxt21x4bwo3ff4rcfjhg4w6r8ysnoov7knv21tacaq66qio3uhds5k6oxba23wc93yzqq78j9d5g7wumz0m9auk6b3ljeciccnpao5b4uk9vo90us2dcmu071jpl7ekf4ajfrwpwjlaj1o5aofonmqld5xev9c0ts665s61asucwkcq480689d6w2seuoug16qduzo5vgbkfmbyobr1hrcs6zye7s6yrgl2swe74fdm6bjrg4t1r1feuqd4d6enwaspydrpwp307vy2ahrlkaey0jff203a6c4h45jktatvwge94x7phx8s4uaq2kp1m5zx3zis0kdp68sak5luvqa30p9ap8qutl4vvqcr6pdhz90j70r71dtf0eqvgfxwcpgpxtgbok4e0anacqo8u7fpnlx6bk9c1d2qe0m7xy7w65vaflrlsjgbj8ysy6o7a6t0lapw0rj0gawmnj1m3mwexqssdfmbhr866uc47umy5rq25arm1o73hef9nzqj78o3ssem2c19xuktufj517e8zhybxipx4gi58t50wt9f1pblhm0kqhuft8r5wvu4coq80jjecdg0byh4kd96dngtx9hablenyqwyff1hmw2zgr2k2sdk105yphlv83w6360y5z8k06waece1ig8xo3dkplw3kvz
ouqq1h451jqok26onqkcgpoamdx3v5tfr4a0vktuujrkalc59szw157bnjghcidk3fdl6lyh3auis94be6481yt830cfshx07viqzqjuox1zrl3mpzm6t3s4w00lvsarecr1g01m9dl41jo5pxrm7128qnt5zotlis4g6s67o0gnphlx3jwjk0aawwd0yj0s3x1xqtb3xmj53a1rfk5zruszk60fvkccg5hvmli6x63t45v7ssbw6p1tku6ed4e5kuf2xgpxnvfw74bspxqecucooutd8ib4fu5zritn5nwxfvg9s7okza6n0yr9pvcl61gkz8fy9q1iyupfyxjaokx5s015d15el5g08s94i5jkzuybx6fcddwzysyz7etqxijcuxkrqpkvnz6uh0s3a3t4wdmtomt1oabllf6sps98lt771udx2st4hsdglrx42zpsbroi81suxsjn5qcvjt3lbx2wcbxi9g1bozcqjlc3j20lbb3n2fq3uenoje87sfef1og67f9h6cbix0gm93czrprak4t60o 00:11:20.637 12:22:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:11:20.637 12:22:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:11:20.637 12:22:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:11:20.637 12:22:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:11:20.907 [2024-07-12 12:22:49.753138] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:20.907 [2024-07-12 12:22:49.753243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75027 ] 00:11:20.907 { 00:11:20.907 "subsystems": [ 00:11:20.907 { 00:11:20.907 "subsystem": "bdev", 00:11:20.907 "config": [ 00:11:20.907 { 00:11:20.907 "params": { 00:11:20.907 "trtype": "pcie", 00:11:20.907 "traddr": "0000:00:10.0", 00:11:20.907 "name": "Nvme0" 00:11:20.907 }, 00:11:20.907 "method": "bdev_nvme_attach_controller" 00:11:20.907 }, 00:11:20.907 { 00:11:20.907 "method": "bdev_wait_for_examine" 00:11:20.907 } 00:11:20.907 ] 00:11:20.907 } 00:11:20.907 ] 00:11:20.907 } 00:11:20.907 [2024-07-12 12:22:49.895362] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.165 [2024-07-12 12:22:49.999266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.165 [2024-07-12 12:22:50.058402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:21.424  Copying: 4096/4096 [B] (average 4000 kBps) 00:11:21.424 00:11:21.424 12:22:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:11:21.424 12:22:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:11:21.424 12:22:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:11:21.424 12:22:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:11:21.424 [2024-07-12 12:22:50.419686] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:21.424 [2024-07-12 12:22:50.419809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75046 ] 00:11:21.424 { 00:11:21.424 "subsystems": [ 00:11:21.424 { 00:11:21.424 "subsystem": "bdev", 00:11:21.424 "config": [ 00:11:21.424 { 00:11:21.424 "params": { 00:11:21.424 "trtype": "pcie", 00:11:21.424 "traddr": "0000:00:10.0", 00:11:21.424 "name": "Nvme0" 00:11:21.424 }, 00:11:21.424 "method": "bdev_nvme_attach_controller" 00:11:21.424 }, 00:11:21.424 { 00:11:21.424 "method": "bdev_wait_for_examine" 00:11:21.424 } 00:11:21.424 ] 00:11:21.424 } 00:11:21.424 ] 00:11:21.424 } 00:11:21.682 [2024-07-12 12:22:50.557991] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.682 [2024-07-12 12:22:50.636557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.682 [2024-07-12 12:22:50.692916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:21.941  Copying: 4096/4096 [B] (average 4000 kBps) 00:11:21.941 00:11:21.941 12:22:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:11:21.942 12:22:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ zsva1dzudaeqpamb5jyjsromozridm88duatbidmj79t9m75o2g3eaiw1pgw3gqotxe5nw4oyk77jh404n8xdstxz5e15ticxm8peigntz6lnmsuw3dwdnoh88bxcclfx4psyoclwamfea7at2lu1sye0f32o1192wbiorcckdaq3lzqm0xdt2n58zapx1nlwa321218wvc7andjxv3lcncoihzwgxft5pondbbdu2cbweaqzirgexebf0p20w5ri2pkyh9v8p7w25q8ixu2vtximi21gdz2h095jukky2cxsba36rseg7gxshmd0tdsoqewnrhv1knutg1zu94uxnrpe3wr3vucjg87x1ir6lg5bmwj5ura58q86ks2k0d72su7gqzmhgbs6b074v6eygcmgqq5dwx0nf5eta3h2xcj911p6lnmmhib4wg14acgsfh8pvvxvkxpwzdzn3puheqhn5qd57sgxv1hkasdi5xgw9g1mafirbbw0sm08js0tpqe8aa3cqzwlnwzly8f0esxv48udmeq829a0h3xqgclpi7so5rpay6zoqei3g1tpwsvxywge5cybvkee87xh1d6q1107dychr7gq35zr2tfpqzaspij4y9avisonzl4cgntsh6noy8x39uvfnr0lps4zo57wcqjdb7uie5y9o38spxwtcj0dhnog7znt27hw776luld17a8318hwnxcisjg2a784skqkaemacxlkivo61e160fz5fqzu5fskatblkf6xwlilq7nfml35nzf3fddxlxurhl4aqx0jx1ya0cj3rwaoumdg7ghtwctl1kdxocgsrb7gybkdr60wxm9mhq015xnr3x51lfoncywlqc39ok8ha1ygo1t685c3b2elhk6lzekzl4lyyg3vxmbmsd1kedxiektr4ziy0ni0zj9h4gyrl75zq45ddhopyzz16q50oh6zj8k0izb35f544w9yag7s5ex4xd84pyx46ughyclphf2k8s2os7wozmcy0ken857kk8k0fugxgvpujipetsru05h7rx2zweihlkrbwmejfl156xu52t9qo0hz7s53amf10irkqj8avtcjkpfyl9qo0p90d17823qdambximhxt8i1pwzga7j0pa8ujg24r34ojuy56c1074rnry9wqudobr3tjdedun5yscb2xnd3xoz4gti70rf3v4mipz744qmeddgziwyii9huq52p6y3ul1tqaf7kbhuen2aanf0yu15g2lgh4zfy94ebpw1rony9yupct7780jyphiatto444wr1fasm7uqghv0ti4glhjbtkd2uj6ffxygo6lb1f01dqqgrqoir6w3o2qsydjx07w0zr4sosoxkh2aeaxjwintdlt7v1xclqcxuncf0emmq7daquns8ekau2q0nsw86jybsorejhxlmjdhxws0e5cygzhnon7bb8ceg7grqvxzgmf3f2cgvf18451lja6adqaatfa2vviwpotgay1d5u1sqwoxu6dhrd20c8lltuyk18jze3rqdafslbqlxoqv8b6tubamufgmckfalwpafglsohxxbbzfbm5nh5i3qe1z25sc3tb654qx4kpo93u10udo0w44lim7l497yoi3yxr9aa98gkhsn7bhuldg4xdlg2ilm0cz2gv3o6s2nq4b1ln0ivjqjyhscj60clb60nwrww2u4v278nxs9ybkfybkgzbdkqcxg8rvu771d7rzrofi2uhjsdsn2c8sm4rrv2rwtbcfi2g0sc2tqgj4b6n3hniaw8eoaxplifixw8o1gg8rxb6bmx658kp1xa66n38n7q4waw8sumdiai9kbb2lqqnscdy98d59ebd85bkhmfvbp56aubvnzbdx02rl292q1spquheo07tkrhw53htu8ax197wpzb2w5wa4iemuihgkbcznvxneyb4sext5g63xk41kceiag00v7e0p9p2er4kb6g20cot2r97jwxmywg9tfl3il55vjqtwrzf6fcv1co44hjhd3htryfd4pi1jh6biqfgl1b0ip6fro948a90woi22o9nxciwsb7ge8cvk9g09rgk7wtvdlzn2o23zozyr8xoea5onefhvvj6a8imopmsr1t08t21ot
y3wkfy7sl2j1mkxf8s7bowlvnlbjy5pcw2x18y5zjyqqziq0jini1hcy01u16h9uasxde2n57hu8ivyvxicc8jysuzjpowjz7upvdt6ot0ve1n6ppegotvvcunydhte60laf7ket2p6yoc5dj7h7dirwf7kc21wng0ywv047q643pq90dfax84r84fe0sgj84g550sjnu1j6d7051piljq9nt9nif49r59fdqkqeox5daci11w7333cdslb0e5ja0hmn8zalubeygylqcmj436b92mzue1rgbrairhkvkhm2qfz9d5turzt5zm2i5bi16676xdo9u3dwi70m2dprodkr4hys9pnpu45ezqqpg5noimtcqarb5c24er0px3rjiaflljqw0nhhyi1y14gcf3x0echrkiv2vofiissx63vjehtx3m33udgnlfwatvcauislunp38e0f0ag22z93uh2basem6b5c9qkbxo34hssulwhhgcmvdt6n4941r9jjgoh9eho0lnt7x4yb0m598ajhu3fv4o6kh3avknnr7z7u2lau5hya0ey8olqsi0n9uu6fgn6efyfeihsa289cxpcownallbpovvp0wqj7ldc6dbo8qsgh2jk19c46plod6ntxdhegfthogw1zecs1i8mg0fpsafyq4d2knu5wo79gasz0l2zrnu0gqlgyfczhairgjojiphalr8iqlvomxt21x4bwo3ff4rcfjhg4w6r8ysnoov7knv21tacaq66qio3uhds5k6oxba23wc93yzqq78j9d5g7wumz0m9auk6b3ljeciccnpao5b4uk9vo90us2dcmu071jpl7ekf4ajfrwpwjlaj1o5aofonmqld5xev9c0ts665s61asucwkcq480689d6w2seuoug16qduzo5vgbkfmbyobr1hrcs6zye7s6yrgl2swe74fdm6bjrg4t1r1feuqd4d6enwaspydrpwp307vy2ahrlkaey0jff203a6c4h45jktatvwge94x7phx8s4uaq2kp1m5zx3zis0kdp68sak5luvqa30p9ap8qutl4vvqcr6pdhz90j70r71dtf0eqvgfxwcpgpxtgbok4e0anacqo8u7fpnlx6bk9c1d2qe0m7xy7w65vaflrlsjgbj8ysy6o7a6t0lapw0rj0gawmnj1m3mwexqssdfmbhr866uc47umy5rq25arm1o73hef9nzqj78o3ssem2c19xuktufj517e8zhybxipx4gi58t50wt9f1pblhm0kqhuft8r5wvu4coq80jjecdg0byh4kd96dngtx9hablenyqwyff1hmw2zgr2k2sdk105yphlv83w6360y5z8k06waece1ig8xo3dkplw3kvzouqq1h451jqok26onqkcgpoamdx3v5tfr4a0vktuujrkalc59szw157bnjghcidk3fdl6lyh3auis94be6481yt830cfshx07viqzqjuox1zrl3mpzm6t3s4w00lvsarecr1g01m9dl41jo5pxrm7128qnt5zotlis4g6s67o0gnphlx3jwjk0aawwd0yj0s3x1xqtb3xmj53a1rfk5zruszk60fvkccg5hvmli6x63t45v7ssbw6p1tku6ed4e5kuf2xgpxnvfw74bspxqecucooutd8ib4fu5zritn5nwxfvg9s7okza6n0yr9pvcl61gkz8fy9q1iyupfyxjaokx5s015d15el5g08s94i5jkzuybx6fcddwzysyz7etqxijcuxkrqpkvnz6uh0s3a3t4wdmtomt1oabllf6sps98lt771udx2st4hsdglrx42zpsbroi81suxsjn5qcvjt3lbx2wcbxi9g1bozcqjlc3j20lbb3n2fq3uenoje87sfef1og67f9h6cbix0gm93czrprak4t60o == 
\z\s\v\a\1\d\z\u\d\a\e\q\p\a\m\b\5\j\y\j\s\r\o\m\o\z\r\i\d\m\8\8\d\u\a\t\b\i\d\m\j\7\9\t\9\m\7\5\o\2\g\3\e\a\i\w\1\p\g\w\3\g\q\o\t\x\e\5\n\w\4\o\y\k\7\7\j\h\4\0\4\n\8\x\d\s\t\x\z\5\e\1\5\t\i\c\x\m\8\p\e\i\g\n\t\z\6\l\n\m\s\u\w\3\d\w\d\n\o\h\8\8\b\x\c\c\l\f\x\4\p\s\y\o\c\l\w\a\m\f\e\a\7\a\t\2\l\u\1\s\y\e\0\f\3\2\o\1\1\9\2\w\b\i\o\r\c\c\k\d\a\q\3\l\z\q\m\0\x\d\t\2\n\5\8\z\a\p\x\1\n\l\w\a\3\2\1\2\1\8\w\v\c\7\a\n\d\j\x\v\3\l\c\n\c\o\i\h\z\w\g\x\f\t\5\p\o\n\d\b\b\d\u\2\c\b\w\e\a\q\z\i\r\g\e\x\e\b\f\0\p\2\0\w\5\r\i\2\p\k\y\h\9\v\8\p\7\w\2\5\q\8\i\x\u\2\v\t\x\i\m\i\2\1\g\d\z\2\h\0\9\5\j\u\k\k\y\2\c\x\s\b\a\3\6\r\s\e\g\7\g\x\s\h\m\d\0\t\d\s\o\q\e\w\n\r\h\v\1\k\n\u\t\g\1\z\u\9\4\u\x\n\r\p\e\3\w\r\3\v\u\c\j\g\8\7\x\1\i\r\6\l\g\5\b\m\w\j\5\u\r\a\5\8\q\8\6\k\s\2\k\0\d\7\2\s\u\7\g\q\z\m\h\g\b\s\6\b\0\7\4\v\6\e\y\g\c\m\g\q\q\5\d\w\x\0\n\f\5\e\t\a\3\h\2\x\c\j\9\1\1\p\6\l\n\m\m\h\i\b\4\w\g\1\4\a\c\g\s\f\h\8\p\v\v\x\v\k\x\p\w\z\d\z\n\3\p\u\h\e\q\h\n\5\q\d\5\7\s\g\x\v\1\h\k\a\s\d\i\5\x\g\w\9\g\1\m\a\f\i\r\b\b\w\0\s\m\0\8\j\s\0\t\p\q\e\8\a\a\3\c\q\z\w\l\n\w\z\l\y\8\f\0\e\s\x\v\4\8\u\d\m\e\q\8\2\9\a\0\h\3\x\q\g\c\l\p\i\7\s\o\5\r\p\a\y\6\z\o\q\e\i\3\g\1\t\p\w\s\v\x\y\w\g\e\5\c\y\b\v\k\e\e\8\7\x\h\1\d\6\q\1\1\0\7\d\y\c\h\r\7\g\q\3\5\z\r\2\t\f\p\q\z\a\s\p\i\j\4\y\9\a\v\i\s\o\n\z\l\4\c\g\n\t\s\h\6\n\o\y\8\x\3\9\u\v\f\n\r\0\l\p\s\4\z\o\5\7\w\c\q\j\d\b\7\u\i\e\5\y\9\o\3\8\s\p\x\w\t\c\j\0\d\h\n\o\g\7\z\n\t\2\7\h\w\7\7\6\l\u\l\d\1\7\a\8\3\1\8\h\w\n\x\c\i\s\j\g\2\a\7\8\4\s\k\q\k\a\e\m\a\c\x\l\k\i\v\o\6\1\e\1\6\0\f\z\5\f\q\z\u\5\f\s\k\a\t\b\l\k\f\6\x\w\l\i\l\q\7\n\f\m\l\3\5\n\z\f\3\f\d\d\x\l\x\u\r\h\l\4\a\q\x\0\j\x\1\y\a\0\c\j\3\r\w\a\o\u\m\d\g\7\g\h\t\w\c\t\l\1\k\d\x\o\c\g\s\r\b\7\g\y\b\k\d\r\6\0\w\x\m\9\m\h\q\0\1\5\x\n\r\3\x\5\1\l\f\o\n\c\y\w\l\q\c\3\9\o\k\8\h\a\1\y\g\o\1\t\6\8\5\c\3\b\2\e\l\h\k\6\l\z\e\k\z\l\4\l\y\y\g\3\v\x\m\b\m\s\d\1\k\e\d\x\i\e\k\t\r\4\z\i\y\0\n\i\0\z\j\9\h\4\g\y\r\l\7\5\z\q\4\5\d\d\h\o\p\y\z\z\1\6\q\5\0\o\h\6\z\j\8\k\0\i\z\b\3\5\f\5\4\4\w\9\y\a\g\7\s\5\e\x\4\x\d\8\4\p\y\x\4\6\u\g\h\y\c\l\p\h\f\2\k\8\s\2\o\s\7\w\o\z\m\c\y\0\k\e\n\8\5\7\k\k\8\k\0\f\u\g\x\g\v\p\u\j\i\p\e\t\s\r\u\0\5\h\7\r\x\2\z\w\e\i\h\l\k\r\b\w\m\e\j\f\l\1\5\6\x\u\5\2\t\9\q\o\0\h\z\7\s\5\3\a\m\f\1\0\i\r\k\q\j\8\a\v\t\c\j\k\p\f\y\l\9\q\o\0\p\9\0\d\1\7\8\2\3\q\d\a\m\b\x\i\m\h\x\t\8\i\1\p\w\z\g\a\7\j\0\p\a\8\u\j\g\2\4\r\3\4\o\j\u\y\5\6\c\1\0\7\4\r\n\r\y\9\w\q\u\d\o\b\r\3\t\j\d\e\d\u\n\5\y\s\c\b\2\x\n\d\3\x\o\z\4\g\t\i\7\0\r\f\3\v\4\m\i\p\z\7\4\4\q\m\e\d\d\g\z\i\w\y\i\i\9\h\u\q\5\2\p\6\y\3\u\l\1\t\q\a\f\7\k\b\h\u\e\n\2\a\a\n\f\0\y\u\1\5\g\2\l\g\h\4\z\f\y\9\4\e\b\p\w\1\r\o\n\y\9\y\u\p\c\t\7\7\8\0\j\y\p\h\i\a\t\t\o\4\4\4\w\r\1\f\a\s\m\7\u\q\g\h\v\0\t\i\4\g\l\h\j\b\t\k\d\2\u\j\6\f\f\x\y\g\o\6\l\b\1\f\0\1\d\q\q\g\r\q\o\i\r\6\w\3\o\2\q\s\y\d\j\x\0\7\w\0\z\r\4\s\o\s\o\x\k\h\2\a\e\a\x\j\w\i\n\t\d\l\t\7\v\1\x\c\l\q\c\x\u\n\c\f\0\e\m\m\q\7\d\a\q\u\n\s\8\e\k\a\u\2\q\0\n\s\w\8\6\j\y\b\s\o\r\e\j\h\x\l\m\j\d\h\x\w\s\0\e\5\c\y\g\z\h\n\o\n\7\b\b\8\c\e\g\7\g\r\q\v\x\z\g\m\f\3\f\2\c\g\v\f\1\8\4\5\1\l\j\a\6\a\d\q\a\a\t\f\a\2\v\v\i\w\p\o\t\g\a\y\1\d\5\u\1\s\q\w\o\x\u\6\d\h\r\d\2\0\c\8\l\l\t\u\y\k\1\8\j\z\e\3\r\q\d\a\f\s\l\b\q\l\x\o\q\v\8\b\6\t\u\b\a\m\u\f\g\m\c\k\f\a\l\w\p\a\f\g\l\s\o\h\x\x\b\b\z\f\b\m\5\n\h\5\i\3\q\e\1\z\2\5\s\c\3\t\b\6\5\4\q\x\4\k\p\o\9\3\u\1\0\u\d\o\0\w\4\4\l\i\m\7\l\4\9\7\y\o\i\3\y\x\r\9\a\a\9\8\g\k\h\s\n\7\b\h\u\l\d\g\4\x\d\l\g\2\i\l\m\0\c\z\2\g\v\3\o\6\s\2\n\q\4\b\1\l\n\0\i\v\j\q\j\y\h\s\c\j\6\0\c\l\b\6\0\n\w\r\w\w\2\u\4\v\2\7\8\n\x\s\9\y\b\k\f\y\b\k\g\z\b\d\k\q\c\x\g\8\r\v\u\7\7\1\d\7\r\z\r\o\f\i\2\u\h\j\s\d\s\n\2\c\8\s\m\4\r\r\v\
2\r\w\t\b\c\f\i\2\g\0\s\c\2\t\q\g\j\4\b\6\n\3\h\n\i\a\w\8\e\o\a\x\p\l\i\f\i\x\w\8\o\1\g\g\8\r\x\b\6\b\m\x\6\5\8\k\p\1\x\a\6\6\n\3\8\n\7\q\4\w\a\w\8\s\u\m\d\i\a\i\9\k\b\b\2\l\q\q\n\s\c\d\y\9\8\d\5\9\e\b\d\8\5\b\k\h\m\f\v\b\p\5\6\a\u\b\v\n\z\b\d\x\0\2\r\l\2\9\2\q\1\s\p\q\u\h\e\o\0\7\t\k\r\h\w\5\3\h\t\u\8\a\x\1\9\7\w\p\z\b\2\w\5\w\a\4\i\e\m\u\i\h\g\k\b\c\z\n\v\x\n\e\y\b\4\s\e\x\t\5\g\6\3\x\k\4\1\k\c\e\i\a\g\0\0\v\7\e\0\p\9\p\2\e\r\4\k\b\6\g\2\0\c\o\t\2\r\9\7\j\w\x\m\y\w\g\9\t\f\l\3\i\l\5\5\v\j\q\t\w\r\z\f\6\f\c\v\1\c\o\4\4\h\j\h\d\3\h\t\r\y\f\d\4\p\i\1\j\h\6\b\i\q\f\g\l\1\b\0\i\p\6\f\r\o\9\4\8\a\9\0\w\o\i\2\2\o\9\n\x\c\i\w\s\b\7\g\e\8\c\v\k\9\g\0\9\r\g\k\7\w\t\v\d\l\z\n\2\o\2\3\z\o\z\y\r\8\x\o\e\a\5\o\n\e\f\h\v\v\j\6\a\8\i\m\o\p\m\s\r\1\t\0\8\t\2\1\o\t\y\3\w\k\f\y\7\s\l\2\j\1\m\k\x\f\8\s\7\b\o\w\l\v\n\l\b\j\y\5\p\c\w\2\x\1\8\y\5\z\j\y\q\q\z\i\q\0\j\i\n\i\1\h\c\y\0\1\u\1\6\h\9\u\a\s\x\d\e\2\n\5\7\h\u\8\i\v\y\v\x\i\c\c\8\j\y\s\u\z\j\p\o\w\j\z\7\u\p\v\d\t\6\o\t\0\v\e\1\n\6\p\p\e\g\o\t\v\v\c\u\n\y\d\h\t\e\6\0\l\a\f\7\k\e\t\2\p\6\y\o\c\5\d\j\7\h\7\d\i\r\w\f\7\k\c\2\1\w\n\g\0\y\w\v\0\4\7\q\6\4\3\p\q\9\0\d\f\a\x\8\4\r\8\4\f\e\0\s\g\j\8\4\g\5\5\0\s\j\n\u\1\j\6\d\7\0\5\1\p\i\l\j\q\9\n\t\9\n\i\f\4\9\r\5\9\f\d\q\k\q\e\o\x\5\d\a\c\i\1\1\w\7\3\3\3\c\d\s\l\b\0\e\5\j\a\0\h\m\n\8\z\a\l\u\b\e\y\g\y\l\q\c\m\j\4\3\6\b\9\2\m\z\u\e\1\r\g\b\r\a\i\r\h\k\v\k\h\m\2\q\f\z\9\d\5\t\u\r\z\t\5\z\m\2\i\5\b\i\1\6\6\7\6\x\d\o\9\u\3\d\w\i\7\0\m\2\d\p\r\o\d\k\r\4\h\y\s\9\p\n\p\u\4\5\e\z\q\q\p\g\5\n\o\i\m\t\c\q\a\r\b\5\c\2\4\e\r\0\p\x\3\r\j\i\a\f\l\l\j\q\w\0\n\h\h\y\i\1\y\1\4\g\c\f\3\x\0\e\c\h\r\k\i\v\2\v\o\f\i\i\s\s\x\6\3\v\j\e\h\t\x\3\m\3\3\u\d\g\n\l\f\w\a\t\v\c\a\u\i\s\l\u\n\p\3\8\e\0\f\0\a\g\2\2\z\9\3\u\h\2\b\a\s\e\m\6\b\5\c\9\q\k\b\x\o\3\4\h\s\s\u\l\w\h\h\g\c\m\v\d\t\6\n\4\9\4\1\r\9\j\j\g\o\h\9\e\h\o\0\l\n\t\7\x\4\y\b\0\m\5\9\8\a\j\h\u\3\f\v\4\o\6\k\h\3\a\v\k\n\n\r\7\z\7\u\2\l\a\u\5\h\y\a\0\e\y\8\o\l\q\s\i\0\n\9\u\u\6\f\g\n\6\e\f\y\f\e\i\h\s\a\2\8\9\c\x\p\c\o\w\n\a\l\l\b\p\o\v\v\p\0\w\q\j\7\l\d\c\6\d\b\o\8\q\s\g\h\2\j\k\1\9\c\4\6\p\l\o\d\6\n\t\x\d\h\e\g\f\t\h\o\g\w\1\z\e\c\s\1\i\8\m\g\0\f\p\s\a\f\y\q\4\d\2\k\n\u\5\w\o\7\9\g\a\s\z\0\l\2\z\r\n\u\0\g\q\l\g\y\f\c\z\h\a\i\r\g\j\o\j\i\p\h\a\l\r\8\i\q\l\v\o\m\x\t\2\1\x\4\b\w\o\3\f\f\4\r\c\f\j\h\g\4\w\6\r\8\y\s\n\o\o\v\7\k\n\v\2\1\t\a\c\a\q\6\6\q\i\o\3\u\h\d\s\5\k\6\o\x\b\a\2\3\w\c\9\3\y\z\q\q\7\8\j\9\d\5\g\7\w\u\m\z\0\m\9\a\u\k\6\b\3\l\j\e\c\i\c\c\n\p\a\o\5\b\4\u\k\9\v\o\9\0\u\s\2\d\c\m\u\0\7\1\j\p\l\7\e\k\f\4\a\j\f\r\w\p\w\j\l\a\j\1\o\5\a\o\f\o\n\m\q\l\d\5\x\e\v\9\c\0\t\s\6\6\5\s\6\1\a\s\u\c\w\k\c\q\4\8\0\6\8\9\d\6\w\2\s\e\u\o\u\g\1\6\q\d\u\z\o\5\v\g\b\k\f\m\b\y\o\b\r\1\h\r\c\s\6\z\y\e\7\s\6\y\r\g\l\2\s\w\e\7\4\f\d\m\6\b\j\r\g\4\t\1\r\1\f\e\u\q\d\4\d\6\e\n\w\a\s\p\y\d\r\p\w\p\3\0\7\v\y\2\a\h\r\l\k\a\e\y\0\j\f\f\2\0\3\a\6\c\4\h\4\5\j\k\t\a\t\v\w\g\e\9\4\x\7\p\h\x\8\s\4\u\a\q\2\k\p\1\m\5\z\x\3\z\i\s\0\k\d\p\6\8\s\a\k\5\l\u\v\q\a\3\0\p\9\a\p\8\q\u\t\l\4\v\v\q\c\r\6\p\d\h\z\9\0\j\7\0\r\7\1\d\t\f\0\e\q\v\g\f\x\w\c\p\g\p\x\t\g\b\o\k\4\e\0\a\n\a\c\q\o\8\u\7\f\p\n\l\x\6\b\k\9\c\1\d\2\q\e\0\m\7\x\y\7\w\6\5\v\a\f\l\r\l\s\j\g\b\j\8\y\s\y\6\o\7\a\6\t\0\l\a\p\w\0\r\j\0\g\a\w\m\n\j\1\m\3\m\w\e\x\q\s\s\d\f\m\b\h\r\8\6\6\u\c\4\7\u\m\y\5\r\q\2\5\a\r\m\1\o\7\3\h\e\f\9\n\z\q\j\7\8\o\3\s\s\e\m\2\c\1\9\x\u\k\t\u\f\j\5\1\7\e\8\z\h\y\b\x\i\p\x\4\g\i\5\8\t\5\0\w\t\9\f\1\p\b\l\h\m\0\k\q\h\u\f\t\8\r\5\w\v\u\4\c\o\q\8\0\j\j\e\c\d\g\0\b\y\h\4\k\d\9\6\d\n\g\t\x\9\h\a\b\l\e\n\y\q\w\y\f\f\1\h\m\w\2\z\g\r\2\k\2\s\d\k\1\0\5\y\p\h\l\v\8\3\w\6\3\6\0\y\5\z\8\k\0\6\w\a\e\c\e\1\i\g\8\x\o\3\d\k\p\l\w\3\k\v\z\o\u\q\q\1
\h\4\5\1\j\q\o\k\2\6\o\n\q\k\c\g\p\o\a\m\d\x\3\v\5\t\f\r\4\a\0\v\k\t\u\u\j\r\k\a\l\c\5\9\s\z\w\1\5\7\b\n\j\g\h\c\i\d\k\3\f\d\l\6\l\y\h\3\a\u\i\s\9\4\b\e\6\4\8\1\y\t\8\3\0\c\f\s\h\x\0\7\v\i\q\z\q\j\u\o\x\1\z\r\l\3\m\p\z\m\6\t\3\s\4\w\0\0\l\v\s\a\r\e\c\r\1\g\0\1\m\9\d\l\4\1\j\o\5\p\x\r\m\7\1\2\8\q\n\t\5\z\o\t\l\i\s\4\g\6\s\6\7\o\0\g\n\p\h\l\x\3\j\w\j\k\0\a\a\w\w\d\0\y\j\0\s\3\x\1\x\q\t\b\3\x\m\j\5\3\a\1\r\f\k\5\z\r\u\s\z\k\6\0\f\v\k\c\c\g\5\h\v\m\l\i\6\x\6\3\t\4\5\v\7\s\s\b\w\6\p\1\t\k\u\6\e\d\4\e\5\k\u\f\2\x\g\p\x\n\v\f\w\7\4\b\s\p\x\q\e\c\u\c\o\o\u\t\d\8\i\b\4\f\u\5\z\r\i\t\n\5\n\w\x\f\v\g\9\s\7\o\k\z\a\6\n\0\y\r\9\p\v\c\l\6\1\g\k\z\8\f\y\9\q\1\i\y\u\p\f\y\x\j\a\o\k\x\5\s\0\1\5\d\1\5\e\l\5\g\0\8\s\9\4\i\5\j\k\z\u\y\b\x\6\f\c\d\d\w\z\y\s\y\z\7\e\t\q\x\i\j\c\u\x\k\r\q\p\k\v\n\z\6\u\h\0\s\3\a\3\t\4\w\d\m\t\o\m\t\1\o\a\b\l\l\f\6\s\p\s\9\8\l\t\7\7\1\u\d\x\2\s\t\4\h\s\d\g\l\r\x\4\2\z\p\s\b\r\o\i\8\1\s\u\x\s\j\n\5\q\c\v\j\t\3\l\b\x\2\w\c\b\x\i\9\g\1\b\o\z\c\q\j\l\c\3\j\2\0\l\b\b\3\n\2\f\q\3\u\e\n\o\j\e\8\7\s\f\e\f\1\o\g\6\7\f\9\h\6\c\b\i\x\0\g\m\9\3\c\z\r\p\r\a\k\4\t\6\0\o ]] 00:11:21.942 00:11:21.942 real 0m1.352s 00:11:21.942 user 0m0.893s 00:11:21.942 sys 0m0.628s 00:11:21.942 12:22:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:21.942 ************************************ 00:11:21.942 END TEST dd_rw_offset 00:11:21.942 ************************************ 00:11:21.942 12:22:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:11:22.202 12:22:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:11:22.202 12:22:51 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:11:22.202 12:22:51 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:11:22.202 12:22:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:22.202 12:22:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:11:22.202 12:22:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:11:22.202 12:22:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:11:22.202 12:22:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:11:22.202 12:22:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:22.202 12:22:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:11:22.202 12:22:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:22.202 12:22:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:11:22.202 [2024-07-12 12:22:51.090092] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:22.202 [2024-07-12 12:22:51.090172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75081 ] 00:11:22.202 { 00:11:22.202 "subsystems": [ 00:11:22.202 { 00:11:22.202 "subsystem": "bdev", 00:11:22.202 "config": [ 00:11:22.202 { 00:11:22.202 "params": { 00:11:22.202 "trtype": "pcie", 00:11:22.202 "traddr": "0000:00:10.0", 00:11:22.202 "name": "Nvme0" 00:11:22.202 }, 00:11:22.202 "method": "bdev_nvme_attach_controller" 00:11:22.202 }, 00:11:22.202 { 00:11:22.202 "method": "bdev_wait_for_examine" 00:11:22.202 } 00:11:22.202 ] 00:11:22.202 } 00:11:22.202 ] 00:11:22.202 } 00:11:22.202 [2024-07-12 12:22:51.225184] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.460 [2024-07-12 12:22:51.307890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.460 [2024-07-12 12:22:51.362521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:22.718  Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:22.718 00:11:22.718 12:22:51 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:22.718 00:11:22.718 real 0m17.636s 00:11:22.718 user 0m12.456s 00:11:22.718 sys 0m6.795s 00:11:22.718 12:22:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:22.718 ************************************ 00:11:22.718 END TEST spdk_dd_basic_rw 00:11:22.718 ************************************ 00:11:22.718 12:22:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:11:22.718 12:22:51 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:11:22.718 12:22:51 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:11:22.718 12:22:51 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:22.718 12:22:51 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.718 12:22:51 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:22.718 ************************************ 00:11:22.718 START TEST spdk_dd_posix 00:11:22.718 ************************************ 00:11:22.718 12:22:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:11:22.718 * Looking for test storage... 
00:11:22.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:22.718 12:22:51 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:11:22.977 * First test run, liburing in use 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:22.977 ************************************ 00:11:22.977 START TEST dd_flag_append 00:11:22.977 ************************************ 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=qc6kbn98yymd2q3ecyaxuo9ghxcn4k79 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=wznifzdutxxhwqeawe6vuzixedcv4hu5 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s qc6kbn98yymd2q3ecyaxuo9ghxcn4k79 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s wznifzdutxxhwqeawe6vuzixedcv4hu5 00:11:22.977 12:22:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:11:22.977 [2024-07-12 12:22:51.873406] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:22.977 [2024-07-12 12:22:51.873517] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75134 ] 00:11:22.977 [2024-07-12 12:22:52.010534] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.236 [2024-07-12 12:22:52.094303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.236 [2024-07-12 12:22:52.153054] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:23.495  Copying: 32/32 [B] (average 31 kBps) 00:11:23.495 00:11:23.495 12:22:52 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ wznifzdutxxhwqeawe6vuzixedcv4hu5qc6kbn98yymd2q3ecyaxuo9ghxcn4k79 == \w\z\n\i\f\z\d\u\t\x\x\h\w\q\e\a\w\e\6\v\u\z\i\x\e\d\c\v\4\h\u\5\q\c\6\k\b\n\9\8\y\y\m\d\2\q\3\e\c\y\a\x\u\o\9\g\h\x\c\n\4\k\7\9 ]] 00:11:23.495 00:11:23.495 real 0m0.574s 00:11:23.495 user 0m0.306s 00:11:23.495 sys 0m0.282s 00:11:23.495 ************************************ 00:11:23.495 END TEST dd_flag_append 00:11:23.495 ************************************ 00:11:23.495 12:22:52 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:23.495 12:22:52 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:11:23.495 12:22:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:11:23.495 12:22:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:11:23.495 12:22:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:23.495 12:22:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:23.495 12:22:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:23.495 ************************************ 00:11:23.495 START TEST dd_flag_directory 00:11:23.495 ************************************ 00:11:23.495 12:22:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:11:23.495 12:22:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:23.495 12:22:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:11:23.495 12:22:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:23.495 12:22:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:23.495 12:22:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:23.495 12:22:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:23.495 12:22:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:23.495 12:22:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:11:23.495 12:22:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:23.495 12:22:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:23.495 12:22:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:23.495 12:22:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:23.495 [2024-07-12 12:22:52.499292] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:23.495 [2024-07-12 12:22:52.499425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75168 ] 00:11:23.753 [2024-07-12 12:22:52.640186] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.753 [2024-07-12 12:22:52.748515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.753 [2024-07-12 12:22:52.806894] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:24.012 [2024-07-12 12:22:52.841225] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:24.012 [2024-07-12 12:22:52.841296] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:24.012 [2024-07-12 12:22:52.841326] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:24.012 [2024-07-12 12:22:52.961057] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:24.012 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:11:24.012 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:24.012 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:11:24.012 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:11:24.012 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:11:24.012 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:24.012 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:24.012 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:11:24.012 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:24.012 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:24.012 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:11:24.012 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:24.012 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.012 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:24.012 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.012 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:24.012 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:24.012 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:24.321 [2024-07-12 12:22:53.112893] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:24.321 [2024-07-12 12:22:53.113012] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75183 ] 00:11:24.321 [2024-07-12 12:22:53.252069] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.321 [2024-07-12 12:22:53.340031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.321 [2024-07-12 12:22:53.394454] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:24.579 [2024-07-12 12:22:53.425282] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:24.579 [2024-07-12 12:22:53.425347] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:24.579 [2024-07-12 12:22:53.425377] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:24.579 [2024-07-12 12:22:53.539080] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:24.579 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:11:24.579 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:24.579 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:11:24.579 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:11:24.579 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:11:24.579 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:24.579 00:11:24.579 real 0m1.177s 00:11:24.579 user 0m0.656s 00:11:24.579 sys 0m0.311s 00:11:24.579 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:24.579 ************************************ 00:11:24.579 END TEST dd_flag_directory 00:11:24.579 ************************************ 00:11:24.579 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@10 -- # set +x 00:11:24.837 12:22:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:11:24.837 12:22:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:11:24.837 12:22:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:24.837 12:22:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:24.837 12:22:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:24.837 ************************************ 00:11:24.837 START TEST dd_flag_nofollow 00:11:24.837 ************************************ 00:11:24.837 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:11:24.837 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:24.837 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:24.837 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:24.837 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:24.837 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:24.837 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:11:24.838 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:24.838 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:24.838 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.838 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:24.838 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.838 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:24.838 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:24.838 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:24.838 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:24.838 12:22:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:24.838 
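The two "Not a directory" failures in the dd_flag_directory run above are the expected outcome: dd.dump0 is a regular file, so opening it with the directory flag (O_DIRECTORY) must fail with ENOTDIR, first on the read side and then on the write side. The same behaviour can be reproduced with GNU dd, which exposes the flag as iflag=/oflag=directory (a sketch, assuming coreutils dd):

  touch dd.dump0                              # a regular file, not a directory
  dd if=dd.dump0 iflag=directory of=/dev/null status=none \
      || echo "read open rejected: Not a directory"
  dd if=/dev/null of=dd.dump0 oflag=directory status=none \
      || echo "write open rejected: Not a directory"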
[2024-07-12 12:22:53.742713] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:24.838 [2024-07-12 12:22:53.742880] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75206 ] 00:11:24.838 [2024-07-12 12:22:53.879231] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.095 [2024-07-12 12:22:53.978522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.095 [2024-07-12 12:22:54.033645] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:25.095 [2024-07-12 12:22:54.066357] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:11:25.095 [2024-07-12 12:22:54.066411] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:11:25.095 [2024-07-12 12:22:54.066425] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:25.095 [2024-07-12 12:22:54.178104] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:25.353 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:11:25.353 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:25.353 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:11:25.353 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:11:25.353 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:11:25.353 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:25.354 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:25.354 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:11:25.354 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:25.354 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:25.354 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:25.354 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:25.354 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:25.354 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:25.354 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:25.354 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:25.354 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:25.354 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:25.354 [2024-07-12 12:22:54.306667] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:25.354 [2024-07-12 12:22:54.306807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75221 ] 00:11:25.612 [2024-07-12 12:22:54.443580] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.612 [2024-07-12 12:22:54.520002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.612 [2024-07-12 12:22:54.573347] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:25.612 [2024-07-12 12:22:54.601733] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:11:25.612 [2024-07-12 12:22:54.601829] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:11:25.612 [2024-07-12 12:22:54.601845] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:25.870 [2024-07-12 12:22:54.709543] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:25.870 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:11:25.870 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:25.870 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:11:25.870 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:11:25.871 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:11:25.871 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:25.871 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:11:25.871 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:11:25.871 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:11:25.871 12:22:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:25.871 [2024-07-12 12:22:54.850496] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
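Both "Too many levels of symbolic links" errors above are the point of the dd_flag_nofollow test: dd.dump0.link and dd.dump1.link are symlinks, so opening them with the nofollow flag (O_NOFOLLOW) fails with ELOOP, while the final copy through the link without the flag is expected to succeed. The read-side check, for instance, can be mirrored with GNU dd (a sketch under that assumption):

  touch dd.dump0
  ln -fs dd.dump0 dd.dump0.link
  dd if=dd.dump0.link iflag=nofollow of=/dev/null status=none \
      || echo "nofollow rejected the symlink (ELOOP)"
  dd if=dd.dump0.link of=/dev/null status=none \
      && echo "without nofollow the link is followed"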
00:11:25.871 [2024-07-12 12:22:54.850614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75223 ] 00:11:26.129 [2024-07-12 12:22:54.988172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.129 [2024-07-12 12:22:55.078802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.129 [2024-07-12 12:22:55.133190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:26.387  Copying: 512/512 [B] (average 500 kBps) 00:11:26.387 00:11:26.387 12:22:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ depua8r3noxnabt9e2iejr60fzjs520c9l9xgczi9x2hetfa1hx4gk8u9k20m7molk64lw7p42qvvziicei2lscuc1wit6eria5k963n8689h51svk5mfftf6nggktmplod1u09e0yvsch8j3raylt48j92drcm3h4a27a6japcd8hh2wj8rxgdt7nk3b51lyb29fno249p0gdhj8nnn4t5p4trmb5gccivgkf22z68t38uav50ixucxbf5wri2v3lpw9e4llv90c938v6be1shq2b5iu860ilbwomvceotmvv6lk148mpi5q5mb7mcwfw5a17b6pbhut4wu0tq6ycw28lfixevrsp6m66ciykjtxh3n32w0l6wr5sn4ka4whoqoajtrncimyfkvzz2b1c0nv2zsgy1fwfnv76ywcvqmgv18kj5s6jhep6k72bhpjkb0irg5au3gntcqmoprk6qd5b2zsuq3ihlpudojkp0x7pvi7hrorn6290g62t6b == \d\e\p\u\a\8\r\3\n\o\x\n\a\b\t\9\e\2\i\e\j\r\6\0\f\z\j\s\5\2\0\c\9\l\9\x\g\c\z\i\9\x\2\h\e\t\f\a\1\h\x\4\g\k\8\u\9\k\2\0\m\7\m\o\l\k\6\4\l\w\7\p\4\2\q\v\v\z\i\i\c\e\i\2\l\s\c\u\c\1\w\i\t\6\e\r\i\a\5\k\9\6\3\n\8\6\8\9\h\5\1\s\v\k\5\m\f\f\t\f\6\n\g\g\k\t\m\p\l\o\d\1\u\0\9\e\0\y\v\s\c\h\8\j\3\r\a\y\l\t\4\8\j\9\2\d\r\c\m\3\h\4\a\2\7\a\6\j\a\p\c\d\8\h\h\2\w\j\8\r\x\g\d\t\7\n\k\3\b\5\1\l\y\b\2\9\f\n\o\2\4\9\p\0\g\d\h\j\8\n\n\n\4\t\5\p\4\t\r\m\b\5\g\c\c\i\v\g\k\f\2\2\z\6\8\t\3\8\u\a\v\5\0\i\x\u\c\x\b\f\5\w\r\i\2\v\3\l\p\w\9\e\4\l\l\v\9\0\c\9\3\8\v\6\b\e\1\s\h\q\2\b\5\i\u\8\6\0\i\l\b\w\o\m\v\c\e\o\t\m\v\v\6\l\k\1\4\8\m\p\i\5\q\5\m\b\7\m\c\w\f\w\5\a\1\7\b\6\p\b\h\u\t\4\w\u\0\t\q\6\y\c\w\2\8\l\f\i\x\e\v\r\s\p\6\m\6\6\c\i\y\k\j\t\x\h\3\n\3\2\w\0\l\6\w\r\5\s\n\4\k\a\4\w\h\o\q\o\a\j\t\r\n\c\i\m\y\f\k\v\z\z\2\b\1\c\0\n\v\2\z\s\g\y\1\f\w\f\n\v\7\6\y\w\c\v\q\m\g\v\1\8\k\j\5\s\6\j\h\e\p\6\k\7\2\b\h\p\j\k\b\0\i\r\g\5\a\u\3\g\n\t\c\q\m\o\p\r\k\6\q\d\5\b\2\z\s\u\q\3\i\h\l\p\u\d\o\j\k\p\0\x\7\p\v\i\7\h\r\o\r\n\6\2\9\0\g\6\2\t\6\b ]] 00:11:26.387 00:11:26.387 real 0m1.677s 00:11:26.387 user 0m0.909s 00:11:26.387 sys 0m0.570s 00:11:26.387 ************************************ 00:11:26.387 END TEST dd_flag_nofollow 00:11:26.387 ************************************ 00:11:26.387 12:22:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:26.387 12:22:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:11:26.387 12:22:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:11:26.387 12:22:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:11:26.387 12:22:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:26.387 12:22:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:26.387 12:22:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:26.387 ************************************ 00:11:26.387 START TEST dd_flag_noatime 00:11:26.387 ************************************ 00:11:26.387 12:22:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:11:26.387 12:22:55 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:11:26.387 12:22:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:11:26.387 12:22:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:11:26.387 12:22:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:11:26.387 12:22:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:11:26.387 12:22:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:26.387 12:22:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1720786975 00:11:26.387 12:22:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:26.387 12:22:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1720786975 00:11:26.387 12:22:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:11:27.786 12:22:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:27.786 [2024-07-12 12:22:56.481234] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:27.786 [2024-07-12 12:22:56.481371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75271 ] 00:11:27.786 [2024-07-12 12:22:56.622777] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.786 [2024-07-12 12:22:56.710063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.786 [2024-07-12 12:22:56.767199] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:28.046  Copying: 512/512 [B] (average 500 kBps) 00:11:28.046 00:11:28.046 12:22:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:28.046 12:22:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1720786975 )) 00:11:28.046 12:22:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:28.046 12:22:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1720786975 )) 00:11:28.046 12:22:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:28.046 [2024-07-12 12:22:57.053558] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:28.046 [2024-07-12 12:22:57.053690] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75279 ] 00:11:28.305 [2024-07-12 12:22:57.193284] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.305 [2024-07-12 12:22:57.294363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.305 [2024-07-12 12:22:57.348507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:28.564  Copying: 512/512 [B] (average 500 kBps) 00:11:28.564 00:11:28.564 12:22:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:28.564 12:22:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1720786977 )) 00:11:28.564 00:11:28.564 real 0m2.185s 00:11:28.564 user 0m0.633s 00:11:28.564 sys 0m0.588s 00:11:28.564 12:22:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:28.564 12:22:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:11:28.564 ************************************ 00:11:28.564 END TEST dd_flag_noatime 00:11:28.564 ************************************ 00:11:28.564 12:22:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:11:28.564 12:22:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:11:28.564 12:22:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:28.564 12:22:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.564 12:22:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:28.564 ************************************ 00:11:28.564 START TEST dd_flags_misc 00:11:28.564 ************************************ 00:11:28.564 12:22:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:11:28.564 12:22:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:11:28.564 12:22:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:11:28.564 12:22:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:11:28.564 12:22:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:11:28.564 12:22:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:11:28.564 12:22:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:11:28.564 12:22:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:11:28.836 12:22:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:28.836 12:22:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:11:28.836 [2024-07-12 12:22:57.701759] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
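The dd_flag_noatime check above records dd.dump0's access time with stat --printf=%X, copies the file with --iflag=noatime, and verifies the atime did not move; a second copy without the flag is then allowed to update it. With GNU dd the same probe looks roughly like this (a sketch; O_NOATIME requires owning the file):

  head -c 512 /dev/urandom > dd.dump0         # a fresh 512-byte test file we own
  before=$(stat --printf=%X dd.dump0)
  sleep 1                                     # give a read the chance to move the atime
  dd if=dd.dump0 iflag=noatime of=/dev/null status=none
  after=$(stat --printf=%X dd.dump0)
  (( before == after )) && echo "noatime preserved the access time"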
00:11:28.836 [2024-07-12 12:22:57.701883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75313 ] 00:11:28.836 [2024-07-12 12:22:57.839518] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.108 [2024-07-12 12:22:57.942631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.108 [2024-07-12 12:22:58.001532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:29.366  Copying: 512/512 [B] (average 500 kBps) 00:11:29.366 00:11:29.367 12:22:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ux6xhbctilzacjsfk1sr2muq74dtushbkkfy71jyd33zn0hcj84vvm57olybbsvg7k17klrpo6t7bw8v0wokahbfgt9drp550geg5d2zp7i70cwazqvpq6548h4sgw149drvtyrrecrr237c6wqyclzumtp44ea6v8guq6mpvh18968fmgfq2bcji1st75a87jh5dhmrz66bm92rq6rzwk8z4rivhpqb7t42jtj5hzry5q7v3igdi749dm2kwxcyefu22acrxpxz5dadfhfhmah2io9r4z87ez1ygh0l8jcwm805gdg3mi0xbzizc27ntjmpoc5fw6wc2qu3itkj2z41jgks5ir677ogka5kqi1isioim9q5ruggvvugm6cpckrx0wf30uj6ejfd4sx2vozxbmp5f00ivg5rdvve2jb39u8plk18g4z8hicf371chfp4i0urrdunhtcxdga30klzcalakuzcy2gn1lezqt7m6x5r3dcuelc315kszvj3 == \u\x\6\x\h\b\c\t\i\l\z\a\c\j\s\f\k\1\s\r\2\m\u\q\7\4\d\t\u\s\h\b\k\k\f\y\7\1\j\y\d\3\3\z\n\0\h\c\j\8\4\v\v\m\5\7\o\l\y\b\b\s\v\g\7\k\1\7\k\l\r\p\o\6\t\7\b\w\8\v\0\w\o\k\a\h\b\f\g\t\9\d\r\p\5\5\0\g\e\g\5\d\2\z\p\7\i\7\0\c\w\a\z\q\v\p\q\6\5\4\8\h\4\s\g\w\1\4\9\d\r\v\t\y\r\r\e\c\r\r\2\3\7\c\6\w\q\y\c\l\z\u\m\t\p\4\4\e\a\6\v\8\g\u\q\6\m\p\v\h\1\8\9\6\8\f\m\g\f\q\2\b\c\j\i\1\s\t\7\5\a\8\7\j\h\5\d\h\m\r\z\6\6\b\m\9\2\r\q\6\r\z\w\k\8\z\4\r\i\v\h\p\q\b\7\t\4\2\j\t\j\5\h\z\r\y\5\q\7\v\3\i\g\d\i\7\4\9\d\m\2\k\w\x\c\y\e\f\u\2\2\a\c\r\x\p\x\z\5\d\a\d\f\h\f\h\m\a\h\2\i\o\9\r\4\z\8\7\e\z\1\y\g\h\0\l\8\j\c\w\m\8\0\5\g\d\g\3\m\i\0\x\b\z\i\z\c\2\7\n\t\j\m\p\o\c\5\f\w\6\w\c\2\q\u\3\i\t\k\j\2\z\4\1\j\g\k\s\5\i\r\6\7\7\o\g\k\a\5\k\q\i\1\i\s\i\o\i\m\9\q\5\r\u\g\g\v\v\u\g\m\6\c\p\c\k\r\x\0\w\f\3\0\u\j\6\e\j\f\d\4\s\x\2\v\o\z\x\b\m\p\5\f\0\0\i\v\g\5\r\d\v\v\e\2\j\b\3\9\u\8\p\l\k\1\8\g\4\z\8\h\i\c\f\3\7\1\c\h\f\p\4\i\0\u\r\r\d\u\n\h\t\c\x\d\g\a\3\0\k\l\z\c\a\l\a\k\u\z\c\y\2\g\n\1\l\e\z\q\t\7\m\6\x\5\r\3\d\c\u\e\l\c\3\1\5\k\s\z\v\j\3 ]] 00:11:29.367 12:22:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:29.367 12:22:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:29.367 [2024-07-12 12:22:58.309742] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:29.367 [2024-07-12 12:22:58.309869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75323 ] 00:11:29.624 [2024-07-12 12:22:58.451290] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.624 [2024-07-12 12:22:58.552461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.624 [2024-07-12 12:22:58.613505] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:29.881  Copying: 512/512 [B] (average 500 kBps) 00:11:29.881 00:11:29.881 12:22:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ux6xhbctilzacjsfk1sr2muq74dtushbkkfy71jyd33zn0hcj84vvm57olybbsvg7k17klrpo6t7bw8v0wokahbfgt9drp550geg5d2zp7i70cwazqvpq6548h4sgw149drvtyrrecrr237c6wqyclzumtp44ea6v8guq6mpvh18968fmgfq2bcji1st75a87jh5dhmrz66bm92rq6rzwk8z4rivhpqb7t42jtj5hzry5q7v3igdi749dm2kwxcyefu22acrxpxz5dadfhfhmah2io9r4z87ez1ygh0l8jcwm805gdg3mi0xbzizc27ntjmpoc5fw6wc2qu3itkj2z41jgks5ir677ogka5kqi1isioim9q5ruggvvugm6cpckrx0wf30uj6ejfd4sx2vozxbmp5f00ivg5rdvve2jb39u8plk18g4z8hicf371chfp4i0urrdunhtcxdga30klzcalakuzcy2gn1lezqt7m6x5r3dcuelc315kszvj3 == \u\x\6\x\h\b\c\t\i\l\z\a\c\j\s\f\k\1\s\r\2\m\u\q\7\4\d\t\u\s\h\b\k\k\f\y\7\1\j\y\d\3\3\z\n\0\h\c\j\8\4\v\v\m\5\7\o\l\y\b\b\s\v\g\7\k\1\7\k\l\r\p\o\6\t\7\b\w\8\v\0\w\o\k\a\h\b\f\g\t\9\d\r\p\5\5\0\g\e\g\5\d\2\z\p\7\i\7\0\c\w\a\z\q\v\p\q\6\5\4\8\h\4\s\g\w\1\4\9\d\r\v\t\y\r\r\e\c\r\r\2\3\7\c\6\w\q\y\c\l\z\u\m\t\p\4\4\e\a\6\v\8\g\u\q\6\m\p\v\h\1\8\9\6\8\f\m\g\f\q\2\b\c\j\i\1\s\t\7\5\a\8\7\j\h\5\d\h\m\r\z\6\6\b\m\9\2\r\q\6\r\z\w\k\8\z\4\r\i\v\h\p\q\b\7\t\4\2\j\t\j\5\h\z\r\y\5\q\7\v\3\i\g\d\i\7\4\9\d\m\2\k\w\x\c\y\e\f\u\2\2\a\c\r\x\p\x\z\5\d\a\d\f\h\f\h\m\a\h\2\i\o\9\r\4\z\8\7\e\z\1\y\g\h\0\l\8\j\c\w\m\8\0\5\g\d\g\3\m\i\0\x\b\z\i\z\c\2\7\n\t\j\m\p\o\c\5\f\w\6\w\c\2\q\u\3\i\t\k\j\2\z\4\1\j\g\k\s\5\i\r\6\7\7\o\g\k\a\5\k\q\i\1\i\s\i\o\i\m\9\q\5\r\u\g\g\v\v\u\g\m\6\c\p\c\k\r\x\0\w\f\3\0\u\j\6\e\j\f\d\4\s\x\2\v\o\z\x\b\m\p\5\f\0\0\i\v\g\5\r\d\v\v\e\2\j\b\3\9\u\8\p\l\k\1\8\g\4\z\8\h\i\c\f\3\7\1\c\h\f\p\4\i\0\u\r\r\d\u\n\h\t\c\x\d\g\a\3\0\k\l\z\c\a\l\a\k\u\z\c\y\2\g\n\1\l\e\z\q\t\7\m\6\x\5\r\3\d\c\u\e\l\c\3\1\5\k\s\z\v\j\3 ]] 00:11:29.881 12:22:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:29.881 12:22:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:29.881 [2024-07-12 12:22:58.909295] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:29.881 [2024-07-12 12:22:58.909422] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75332 ] 00:11:30.139 [2024-07-12 12:22:59.048359] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.139 [2024-07-12 12:22:59.153451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.139 [2024-07-12 12:22:59.207570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:30.397  Copying: 512/512 [B] (average 125 kBps) 00:11:30.397 00:11:30.397 12:22:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ux6xhbctilzacjsfk1sr2muq74dtushbkkfy71jyd33zn0hcj84vvm57olybbsvg7k17klrpo6t7bw8v0wokahbfgt9drp550geg5d2zp7i70cwazqvpq6548h4sgw149drvtyrrecrr237c6wqyclzumtp44ea6v8guq6mpvh18968fmgfq2bcji1st75a87jh5dhmrz66bm92rq6rzwk8z4rivhpqb7t42jtj5hzry5q7v3igdi749dm2kwxcyefu22acrxpxz5dadfhfhmah2io9r4z87ez1ygh0l8jcwm805gdg3mi0xbzizc27ntjmpoc5fw6wc2qu3itkj2z41jgks5ir677ogka5kqi1isioim9q5ruggvvugm6cpckrx0wf30uj6ejfd4sx2vozxbmp5f00ivg5rdvve2jb39u8plk18g4z8hicf371chfp4i0urrdunhtcxdga30klzcalakuzcy2gn1lezqt7m6x5r3dcuelc315kszvj3 == \u\x\6\x\h\b\c\t\i\l\z\a\c\j\s\f\k\1\s\r\2\m\u\q\7\4\d\t\u\s\h\b\k\k\f\y\7\1\j\y\d\3\3\z\n\0\h\c\j\8\4\v\v\m\5\7\o\l\y\b\b\s\v\g\7\k\1\7\k\l\r\p\o\6\t\7\b\w\8\v\0\w\o\k\a\h\b\f\g\t\9\d\r\p\5\5\0\g\e\g\5\d\2\z\p\7\i\7\0\c\w\a\z\q\v\p\q\6\5\4\8\h\4\s\g\w\1\4\9\d\r\v\t\y\r\r\e\c\r\r\2\3\7\c\6\w\q\y\c\l\z\u\m\t\p\4\4\e\a\6\v\8\g\u\q\6\m\p\v\h\1\8\9\6\8\f\m\g\f\q\2\b\c\j\i\1\s\t\7\5\a\8\7\j\h\5\d\h\m\r\z\6\6\b\m\9\2\r\q\6\r\z\w\k\8\z\4\r\i\v\h\p\q\b\7\t\4\2\j\t\j\5\h\z\r\y\5\q\7\v\3\i\g\d\i\7\4\9\d\m\2\k\w\x\c\y\e\f\u\2\2\a\c\r\x\p\x\z\5\d\a\d\f\h\f\h\m\a\h\2\i\o\9\r\4\z\8\7\e\z\1\y\g\h\0\l\8\j\c\w\m\8\0\5\g\d\g\3\m\i\0\x\b\z\i\z\c\2\7\n\t\j\m\p\o\c\5\f\w\6\w\c\2\q\u\3\i\t\k\j\2\z\4\1\j\g\k\s\5\i\r\6\7\7\o\g\k\a\5\k\q\i\1\i\s\i\o\i\m\9\q\5\r\u\g\g\v\v\u\g\m\6\c\p\c\k\r\x\0\w\f\3\0\u\j\6\e\j\f\d\4\s\x\2\v\o\z\x\b\m\p\5\f\0\0\i\v\g\5\r\d\v\v\e\2\j\b\3\9\u\8\p\l\k\1\8\g\4\z\8\h\i\c\f\3\7\1\c\h\f\p\4\i\0\u\r\r\d\u\n\h\t\c\x\d\g\a\3\0\k\l\z\c\a\l\a\k\u\z\c\y\2\g\n\1\l\e\z\q\t\7\m\6\x\5\r\3\d\c\u\e\l\c\3\1\5\k\s\z\v\j\3 ]] 00:11:30.397 12:22:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:30.397 12:22:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:30.655 [2024-07-12 12:22:59.507086] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:30.655 [2024-07-12 12:22:59.507211] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75347 ] 00:11:30.655 [2024-07-12 12:22:59.643462] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.655 [2024-07-12 12:22:59.727265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.913 [2024-07-12 12:22:59.786524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:31.172  Copying: 512/512 [B] (average 250 kBps) 00:11:31.172 00:11:31.172 12:23:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ux6xhbctilzacjsfk1sr2muq74dtushbkkfy71jyd33zn0hcj84vvm57olybbsvg7k17klrpo6t7bw8v0wokahbfgt9drp550geg5d2zp7i70cwazqvpq6548h4sgw149drvtyrrecrr237c6wqyclzumtp44ea6v8guq6mpvh18968fmgfq2bcji1st75a87jh5dhmrz66bm92rq6rzwk8z4rivhpqb7t42jtj5hzry5q7v3igdi749dm2kwxcyefu22acrxpxz5dadfhfhmah2io9r4z87ez1ygh0l8jcwm805gdg3mi0xbzizc27ntjmpoc5fw6wc2qu3itkj2z41jgks5ir677ogka5kqi1isioim9q5ruggvvugm6cpckrx0wf30uj6ejfd4sx2vozxbmp5f00ivg5rdvve2jb39u8plk18g4z8hicf371chfp4i0urrdunhtcxdga30klzcalakuzcy2gn1lezqt7m6x5r3dcuelc315kszvj3 == \u\x\6\x\h\b\c\t\i\l\z\a\c\j\s\f\k\1\s\r\2\m\u\q\7\4\d\t\u\s\h\b\k\k\f\y\7\1\j\y\d\3\3\z\n\0\h\c\j\8\4\v\v\m\5\7\o\l\y\b\b\s\v\g\7\k\1\7\k\l\r\p\o\6\t\7\b\w\8\v\0\w\o\k\a\h\b\f\g\t\9\d\r\p\5\5\0\g\e\g\5\d\2\z\p\7\i\7\0\c\w\a\z\q\v\p\q\6\5\4\8\h\4\s\g\w\1\4\9\d\r\v\t\y\r\r\e\c\r\r\2\3\7\c\6\w\q\y\c\l\z\u\m\t\p\4\4\e\a\6\v\8\g\u\q\6\m\p\v\h\1\8\9\6\8\f\m\g\f\q\2\b\c\j\i\1\s\t\7\5\a\8\7\j\h\5\d\h\m\r\z\6\6\b\m\9\2\r\q\6\r\z\w\k\8\z\4\r\i\v\h\p\q\b\7\t\4\2\j\t\j\5\h\z\r\y\5\q\7\v\3\i\g\d\i\7\4\9\d\m\2\k\w\x\c\y\e\f\u\2\2\a\c\r\x\p\x\z\5\d\a\d\f\h\f\h\m\a\h\2\i\o\9\r\4\z\8\7\e\z\1\y\g\h\0\l\8\j\c\w\m\8\0\5\g\d\g\3\m\i\0\x\b\z\i\z\c\2\7\n\t\j\m\p\o\c\5\f\w\6\w\c\2\q\u\3\i\t\k\j\2\z\4\1\j\g\k\s\5\i\r\6\7\7\o\g\k\a\5\k\q\i\1\i\s\i\o\i\m\9\q\5\r\u\g\g\v\v\u\g\m\6\c\p\c\k\r\x\0\w\f\3\0\u\j\6\e\j\f\d\4\s\x\2\v\o\z\x\b\m\p\5\f\0\0\i\v\g\5\r\d\v\v\e\2\j\b\3\9\u\8\p\l\k\1\8\g\4\z\8\h\i\c\f\3\7\1\c\h\f\p\4\i\0\u\r\r\d\u\n\h\t\c\x\d\g\a\3\0\k\l\z\c\a\l\a\k\u\z\c\y\2\g\n\1\l\e\z\q\t\7\m\6\x\5\r\3\d\c\u\e\l\c\3\1\5\k\s\z\v\j\3 ]] 00:11:31.172 12:23:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:11:31.172 12:23:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:11:31.172 12:23:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:11:31.172 12:23:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:11:31.172 12:23:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:31.172 12:23:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:11:31.172 [2024-07-12 12:23:00.095509] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:31.172 [2024-07-12 12:23:00.095602] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75351 ] 00:11:31.172 [2024-07-12 12:23:00.233240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.430 [2024-07-12 12:23:00.330224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.430 [2024-07-12 12:23:00.384437] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:31.689  Copying: 512/512 [B] (average 500 kBps) 00:11:31.689 00:11:31.689 12:23:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ f7t4z4h6komwhjq2xzb595e1lppm0ao0uoqraisaxgd1x6ifsyt1tz6wglw59psntvikun0hg19p0gehx375zevx4683hdhr8f43qi62lxonovq8e8xpf8djlpn9plkdszlmklgof3njddfgwrohmoyae1h9kfsdn00yotf7nmgtrehbeh6ramq7cf8qgql3ze0zbhqj1pszv1rnaxca4j0eiafc7ag5n7m7ou10se30h4aaailwxhxhj2j8mynkl24wwysipna7rpggy5iww99oa1esqkad2yh2xbr2joyske7wjx2syjspciy9hrbnia12v2dagt0hruwsv43qpvvw93mox7zoqo7inmt54cfvh3p5c323cp41lrt20h5eozo2rr9lfco6ahtoicivvgm868p33pmciqd9f3f2jmsrkxlefbi2oxd7aa3xn4gto2dbze4l5kbltsuxmjco0o8sd29uh9rax9ndancf46g40hh3ppr7azkd96bjvsii == \f\7\t\4\z\4\h\6\k\o\m\w\h\j\q\2\x\z\b\5\9\5\e\1\l\p\p\m\0\a\o\0\u\o\q\r\a\i\s\a\x\g\d\1\x\6\i\f\s\y\t\1\t\z\6\w\g\l\w\5\9\p\s\n\t\v\i\k\u\n\0\h\g\1\9\p\0\g\e\h\x\3\7\5\z\e\v\x\4\6\8\3\h\d\h\r\8\f\4\3\q\i\6\2\l\x\o\n\o\v\q\8\e\8\x\p\f\8\d\j\l\p\n\9\p\l\k\d\s\z\l\m\k\l\g\o\f\3\n\j\d\d\f\g\w\r\o\h\m\o\y\a\e\1\h\9\k\f\s\d\n\0\0\y\o\t\f\7\n\m\g\t\r\e\h\b\e\h\6\r\a\m\q\7\c\f\8\q\g\q\l\3\z\e\0\z\b\h\q\j\1\p\s\z\v\1\r\n\a\x\c\a\4\j\0\e\i\a\f\c\7\a\g\5\n\7\m\7\o\u\1\0\s\e\3\0\h\4\a\a\a\i\l\w\x\h\x\h\j\2\j\8\m\y\n\k\l\2\4\w\w\y\s\i\p\n\a\7\r\p\g\g\y\5\i\w\w\9\9\o\a\1\e\s\q\k\a\d\2\y\h\2\x\b\r\2\j\o\y\s\k\e\7\w\j\x\2\s\y\j\s\p\c\i\y\9\h\r\b\n\i\a\1\2\v\2\d\a\g\t\0\h\r\u\w\s\v\4\3\q\p\v\v\w\9\3\m\o\x\7\z\o\q\o\7\i\n\m\t\5\4\c\f\v\h\3\p\5\c\3\2\3\c\p\4\1\l\r\t\2\0\h\5\e\o\z\o\2\r\r\9\l\f\c\o\6\a\h\t\o\i\c\i\v\v\g\m\8\6\8\p\3\3\p\m\c\i\q\d\9\f\3\f\2\j\m\s\r\k\x\l\e\f\b\i\2\o\x\d\7\a\a\3\x\n\4\g\t\o\2\d\b\z\e\4\l\5\k\b\l\t\s\u\x\m\j\c\o\0\o\8\s\d\2\9\u\h\9\r\a\x\9\n\d\a\n\c\f\4\6\g\4\0\h\h\3\p\p\r\7\a\z\k\d\9\6\b\j\v\s\i\i ]] 00:11:31.689 12:23:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:31.689 12:23:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:31.689 [2024-07-12 12:23:00.661723] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:31.689 [2024-07-12 12:23:00.661872] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75366 ] 00:11:31.947 [2024-07-12 12:23:00.795088] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.947 [2024-07-12 12:23:00.861058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.947 [2024-07-12 12:23:00.914317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:32.206  Copying: 512/512 [B] (average 500 kBps) 00:11:32.206 00:11:32.206 12:23:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ f7t4z4h6komwhjq2xzb595e1lppm0ao0uoqraisaxgd1x6ifsyt1tz6wglw59psntvikun0hg19p0gehx375zevx4683hdhr8f43qi62lxonovq8e8xpf8djlpn9plkdszlmklgof3njddfgwrohmoyae1h9kfsdn00yotf7nmgtrehbeh6ramq7cf8qgql3ze0zbhqj1pszv1rnaxca4j0eiafc7ag5n7m7ou10se30h4aaailwxhxhj2j8mynkl24wwysipna7rpggy5iww99oa1esqkad2yh2xbr2joyske7wjx2syjspciy9hrbnia12v2dagt0hruwsv43qpvvw93mox7zoqo7inmt54cfvh3p5c323cp41lrt20h5eozo2rr9lfco6ahtoicivvgm868p33pmciqd9f3f2jmsrkxlefbi2oxd7aa3xn4gto2dbze4l5kbltsuxmjco0o8sd29uh9rax9ndancf46g40hh3ppr7azkd96bjvsii == \f\7\t\4\z\4\h\6\k\o\m\w\h\j\q\2\x\z\b\5\9\5\e\1\l\p\p\m\0\a\o\0\u\o\q\r\a\i\s\a\x\g\d\1\x\6\i\f\s\y\t\1\t\z\6\w\g\l\w\5\9\p\s\n\t\v\i\k\u\n\0\h\g\1\9\p\0\g\e\h\x\3\7\5\z\e\v\x\4\6\8\3\h\d\h\r\8\f\4\3\q\i\6\2\l\x\o\n\o\v\q\8\e\8\x\p\f\8\d\j\l\p\n\9\p\l\k\d\s\z\l\m\k\l\g\o\f\3\n\j\d\d\f\g\w\r\o\h\m\o\y\a\e\1\h\9\k\f\s\d\n\0\0\y\o\t\f\7\n\m\g\t\r\e\h\b\e\h\6\r\a\m\q\7\c\f\8\q\g\q\l\3\z\e\0\z\b\h\q\j\1\p\s\z\v\1\r\n\a\x\c\a\4\j\0\e\i\a\f\c\7\a\g\5\n\7\m\7\o\u\1\0\s\e\3\0\h\4\a\a\a\i\l\w\x\h\x\h\j\2\j\8\m\y\n\k\l\2\4\w\w\y\s\i\p\n\a\7\r\p\g\g\y\5\i\w\w\9\9\o\a\1\e\s\q\k\a\d\2\y\h\2\x\b\r\2\j\o\y\s\k\e\7\w\j\x\2\s\y\j\s\p\c\i\y\9\h\r\b\n\i\a\1\2\v\2\d\a\g\t\0\h\r\u\w\s\v\4\3\q\p\v\v\w\9\3\m\o\x\7\z\o\q\o\7\i\n\m\t\5\4\c\f\v\h\3\p\5\c\3\2\3\c\p\4\1\l\r\t\2\0\h\5\e\o\z\o\2\r\r\9\l\f\c\o\6\a\h\t\o\i\c\i\v\v\g\m\8\6\8\p\3\3\p\m\c\i\q\d\9\f\3\f\2\j\m\s\r\k\x\l\e\f\b\i\2\o\x\d\7\a\a\3\x\n\4\g\t\o\2\d\b\z\e\4\l\5\k\b\l\t\s\u\x\m\j\c\o\0\o\8\s\d\2\9\u\h\9\r\a\x\9\n\d\a\n\c\f\4\6\g\4\0\h\h\3\p\p\r\7\a\z\k\d\9\6\b\j\v\s\i\i ]] 00:11:32.206 12:23:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:32.206 12:23:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:32.206 [2024-07-12 12:23:01.202119] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:32.206 [2024-07-12 12:23:01.202210] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75376 ] 00:11:32.464 [2024-07-12 12:23:01.337961] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.464 [2024-07-12 12:23:01.433311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.464 [2024-07-12 12:23:01.492389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:32.722  Copying: 512/512 [B] (average 250 kBps) 00:11:32.722 00:11:32.722 12:23:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ f7t4z4h6komwhjq2xzb595e1lppm0ao0uoqraisaxgd1x6ifsyt1tz6wglw59psntvikun0hg19p0gehx375zevx4683hdhr8f43qi62lxonovq8e8xpf8djlpn9plkdszlmklgof3njddfgwrohmoyae1h9kfsdn00yotf7nmgtrehbeh6ramq7cf8qgql3ze0zbhqj1pszv1rnaxca4j0eiafc7ag5n7m7ou10se30h4aaailwxhxhj2j8mynkl24wwysipna7rpggy5iww99oa1esqkad2yh2xbr2joyske7wjx2syjspciy9hrbnia12v2dagt0hruwsv43qpvvw93mox7zoqo7inmt54cfvh3p5c323cp41lrt20h5eozo2rr9lfco6ahtoicivvgm868p33pmciqd9f3f2jmsrkxlefbi2oxd7aa3xn4gto2dbze4l5kbltsuxmjco0o8sd29uh9rax9ndancf46g40hh3ppr7azkd96bjvsii == \f\7\t\4\z\4\h\6\k\o\m\w\h\j\q\2\x\z\b\5\9\5\e\1\l\p\p\m\0\a\o\0\u\o\q\r\a\i\s\a\x\g\d\1\x\6\i\f\s\y\t\1\t\z\6\w\g\l\w\5\9\p\s\n\t\v\i\k\u\n\0\h\g\1\9\p\0\g\e\h\x\3\7\5\z\e\v\x\4\6\8\3\h\d\h\r\8\f\4\3\q\i\6\2\l\x\o\n\o\v\q\8\e\8\x\p\f\8\d\j\l\p\n\9\p\l\k\d\s\z\l\m\k\l\g\o\f\3\n\j\d\d\f\g\w\r\o\h\m\o\y\a\e\1\h\9\k\f\s\d\n\0\0\y\o\t\f\7\n\m\g\t\r\e\h\b\e\h\6\r\a\m\q\7\c\f\8\q\g\q\l\3\z\e\0\z\b\h\q\j\1\p\s\z\v\1\r\n\a\x\c\a\4\j\0\e\i\a\f\c\7\a\g\5\n\7\m\7\o\u\1\0\s\e\3\0\h\4\a\a\a\i\l\w\x\h\x\h\j\2\j\8\m\y\n\k\l\2\4\w\w\y\s\i\p\n\a\7\r\p\g\g\y\5\i\w\w\9\9\o\a\1\e\s\q\k\a\d\2\y\h\2\x\b\r\2\j\o\y\s\k\e\7\w\j\x\2\s\y\j\s\p\c\i\y\9\h\r\b\n\i\a\1\2\v\2\d\a\g\t\0\h\r\u\w\s\v\4\3\q\p\v\v\w\9\3\m\o\x\7\z\o\q\o\7\i\n\m\t\5\4\c\f\v\h\3\p\5\c\3\2\3\c\p\4\1\l\r\t\2\0\h\5\e\o\z\o\2\r\r\9\l\f\c\o\6\a\h\t\o\i\c\i\v\v\g\m\8\6\8\p\3\3\p\m\c\i\q\d\9\f\3\f\2\j\m\s\r\k\x\l\e\f\b\i\2\o\x\d\7\a\a\3\x\n\4\g\t\o\2\d\b\z\e\4\l\5\k\b\l\t\s\u\x\m\j\c\o\0\o\8\s\d\2\9\u\h\9\r\a\x\9\n\d\a\n\c\f\4\6\g\4\0\h\h\3\p\p\r\7\a\z\k\d\9\6\b\j\v\s\i\i ]] 00:11:32.722 12:23:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:32.722 12:23:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:32.722 [2024-07-12 12:23:01.770763] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
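The dd_flags_misc block above drives the same 512-byte copy through every pairing of the read flags (direct, nonblock) with the write flags (direct, nonblock, sync, dsync) -- eight runs in total, each followed by the same content comparison. A compact way to reproduce that matrix with GNU dd (a sketch; direct I/O assumes 512-byte logical sectors for alignment):

  head -c 512 /dev/urandom > dd.dump0         # 512-byte test file
  for iflag in direct nonblock; do
      for oflag in direct nonblock sync dsync; do
          dd if=dd.dump0 of=dd.dump1 iflag="$iflag" oflag="$oflag" bs=512 status=none
          cmp -s dd.dump0 dd.dump1 && echo "ok: iflag=$iflag oflag=$oflag"
      done
  done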
00:11:32.722 [2024-07-12 12:23:01.770883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75385 ] 00:11:32.980 [2024-07-12 12:23:01.903135] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.980 [2024-07-12 12:23:01.995165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.980 [2024-07-12 12:23:02.049567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:33.238  Copying: 512/512 [B] (average 250 kBps) 00:11:33.238 00:11:33.238 12:23:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ f7t4z4h6komwhjq2xzb595e1lppm0ao0uoqraisaxgd1x6ifsyt1tz6wglw59psntvikun0hg19p0gehx375zevx4683hdhr8f43qi62lxonovq8e8xpf8djlpn9plkdszlmklgof3njddfgwrohmoyae1h9kfsdn00yotf7nmgtrehbeh6ramq7cf8qgql3ze0zbhqj1pszv1rnaxca4j0eiafc7ag5n7m7ou10se30h4aaailwxhxhj2j8mynkl24wwysipna7rpggy5iww99oa1esqkad2yh2xbr2joyske7wjx2syjspciy9hrbnia12v2dagt0hruwsv43qpvvw93mox7zoqo7inmt54cfvh3p5c323cp41lrt20h5eozo2rr9lfco6ahtoicivvgm868p33pmciqd9f3f2jmsrkxlefbi2oxd7aa3xn4gto2dbze4l5kbltsuxmjco0o8sd29uh9rax9ndancf46g40hh3ppr7azkd96bjvsii == \f\7\t\4\z\4\h\6\k\o\m\w\h\j\q\2\x\z\b\5\9\5\e\1\l\p\p\m\0\a\o\0\u\o\q\r\a\i\s\a\x\g\d\1\x\6\i\f\s\y\t\1\t\z\6\w\g\l\w\5\9\p\s\n\t\v\i\k\u\n\0\h\g\1\9\p\0\g\e\h\x\3\7\5\z\e\v\x\4\6\8\3\h\d\h\r\8\f\4\3\q\i\6\2\l\x\o\n\o\v\q\8\e\8\x\p\f\8\d\j\l\p\n\9\p\l\k\d\s\z\l\m\k\l\g\o\f\3\n\j\d\d\f\g\w\r\o\h\m\o\y\a\e\1\h\9\k\f\s\d\n\0\0\y\o\t\f\7\n\m\g\t\r\e\h\b\e\h\6\r\a\m\q\7\c\f\8\q\g\q\l\3\z\e\0\z\b\h\q\j\1\p\s\z\v\1\r\n\a\x\c\a\4\j\0\e\i\a\f\c\7\a\g\5\n\7\m\7\o\u\1\0\s\e\3\0\h\4\a\a\a\i\l\w\x\h\x\h\j\2\j\8\m\y\n\k\l\2\4\w\w\y\s\i\p\n\a\7\r\p\g\g\y\5\i\w\w\9\9\o\a\1\e\s\q\k\a\d\2\y\h\2\x\b\r\2\j\o\y\s\k\e\7\w\j\x\2\s\y\j\s\p\c\i\y\9\h\r\b\n\i\a\1\2\v\2\d\a\g\t\0\h\r\u\w\s\v\4\3\q\p\v\v\w\9\3\m\o\x\7\z\o\q\o\7\i\n\m\t\5\4\c\f\v\h\3\p\5\c\3\2\3\c\p\4\1\l\r\t\2\0\h\5\e\o\z\o\2\r\r\9\l\f\c\o\6\a\h\t\o\i\c\i\v\v\g\m\8\6\8\p\3\3\p\m\c\i\q\d\9\f\3\f\2\j\m\s\r\k\x\l\e\f\b\i\2\o\x\d\7\a\a\3\x\n\4\g\t\o\2\d\b\z\e\4\l\5\k\b\l\t\s\u\x\m\j\c\o\0\o\8\s\d\2\9\u\h\9\r\a\x\9\n\d\a\n\c\f\4\6\g\4\0\h\h\3\p\p\r\7\a\z\k\d\9\6\b\j\v\s\i\i ]] 00:11:33.238 00:11:33.238 real 0m4.663s 00:11:33.238 user 0m2.588s 00:11:33.238 sys 0m2.271s 00:11:33.238 ************************************ 00:11:33.238 END TEST dd_flags_misc 00:11:33.238 ************************************ 00:11:33.238 12:23:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.238 12:23:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:11:33.497 * Second test run, disabling liburing, forcing AIO 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:33.497 ************************************ 00:11:33.497 START TEST dd_flag_append_forced_aio 00:11:33.497 ************************************ 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=iwyq8xfaf9l5d3p37u0mdtbyk065t3u4 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=qdpvlog075wem47b9juy8gpli4pzq1kg 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s iwyq8xfaf9l5d3p37u0mdtbyk065t3u4 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s qdpvlog075wem47b9juy8gpli4pzq1kg 00:11:33.497 12:23:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:11:33.497 [2024-07-12 12:23:02.425689] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
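From the "Second test run, disabling liburing, forcing AIO" marker onward, every test repeats with --aio appended to the DD_APP array (dd/posix.sh@113 in the trace above), so each later invocation runs spdk_dd --aio ... and exercises the AIO path instead of io_uring. The mechanism is plain bash array expansion, roughly (assuming DD_APP initially holds just the spdk_dd binary path):

  DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
  DD_APP+=("--aio")                           # appended for the second test run
  "${DD_APP[@]}" --if=dd.dump0 --of=dd.dump1 --oflag=append
  # expands to: spdk_dd --aio --if=dd.dump0 --of=dd.dump1 --oflag=append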
00:11:33.497 [2024-07-12 12:23:02.425824] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75414 ] 00:11:33.497 [2024-07-12 12:23:02.563667] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.756 [2024-07-12 12:23:02.659975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.756 [2024-07-12 12:23:02.716940] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:34.014  Copying: 32/32 [B] (average 31 kBps) 00:11:34.014 00:11:34.014 12:23:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ qdpvlog075wem47b9juy8gpli4pzq1kgiwyq8xfaf9l5d3p37u0mdtbyk065t3u4 == \q\d\p\v\l\o\g\0\7\5\w\e\m\4\7\b\9\j\u\y\8\g\p\l\i\4\p\z\q\1\k\g\i\w\y\q\8\x\f\a\f\9\l\5\d\3\p\3\7\u\0\m\d\t\b\y\k\0\6\5\t\3\u\4 ]] 00:11:34.014 00:11:34.014 real 0m0.609s 00:11:34.014 user 0m0.332s 00:11:34.015 sys 0m0.157s 00:11:34.015 12:23:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:34.015 ************************************ 00:11:34.015 END TEST dd_flag_append_forced_aio 00:11:34.015 ************************************ 00:11:34.015 12:23:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:34.015 12:23:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:11:34.015 12:23:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:11:34.015 12:23:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:34.015 12:23:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.015 12:23:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:34.015 ************************************ 00:11:34.015 START TEST dd_flag_directory_forced_aio 00:11:34.015 ************************************ 00:11:34.015 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:11:34.015 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:34.015 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:11:34.015 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:34.015 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:34.015 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:34.015 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:34.015 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:11:34.015 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:34.015 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:34.015 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:34.015 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:34.015 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:34.015 [2024-07-12 12:23:03.082938] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:34.015 [2024-07-12 12:23:03.083045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75440 ] 00:11:34.273 [2024-07-12 12:23:03.221381] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.273 [2024-07-12 12:23:03.301554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.531 [2024-07-12 12:23:03.360634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:34.531 [2024-07-12 12:23:03.392999] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:34.531 [2024-07-12 12:23:03.393069] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:34.531 [2024-07-12 12:23:03.393099] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:34.531 [2024-07-12 12:23:03.508059] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:34.531 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:11:34.531 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:34.531 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:11:34.531 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:11:34.531 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:11:34.531 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:34.531 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:34.531 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:11:34.531 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:34.531 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:34.531 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:34.531 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:34.531 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:34.531 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:34.531 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:34.531 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:34.531 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:34.531 12:23:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:34.790 [2024-07-12 12:23:03.657962] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:34.790 [2024-07-12 12:23:03.658138] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75454 ] 00:11:34.790 [2024-07-12 12:23:03.794478] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.049 [2024-07-12 12:23:03.885756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.049 [2024-07-12 12:23:03.945582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:35.049 [2024-07-12 12:23:03.977558] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:35.049 [2024-07-12 12:23:03.977627] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:35.049 [2024-07-12 12:23:03.977656] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:35.049 [2024-07-12 12:23:04.096581] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:11:35.308 
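Both NOT-wrapped spdk_dd runs above point the directory open flag at a regular file, first on the input side (--iflag=directory) and then on the output side (--oflag=directory); each is expected to fail with "Not a directory", which the harness then normalizes down to exit status 1. A minimal sketch of the same check by hand, assuming the repo layout used in this log and that test/dd/dd.dump0 already exists as a regular scratch file:

  SPDK=/home/vagrant/spdk_repo/spdk   # repo root seen throughout this log
  # expected to fail: dd.dump0 is a regular file, not a directory
  "$SPDK/build/bin/spdk_dd" --aio --if="$SPDK/test/dd/dd.dump0" --iflag=directory \
      --of="$SPDK/test/dd/dd.dump0" || echo "input-side directory flag rejected, as expected"
  # same expectation on the output side
  "$SPDK/build/bin/spdk_dd" --aio --if="$SPDK/test/dd/dd.dump0" \
      --of="$SPDK/test/dd/dd.dump0" --oflag=directory || echo "output-side directory flag rejected, as expected"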
12:23:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:35.308 00:11:35.308 real 0m1.155s 00:11:35.308 user 0m0.634s 00:11:35.308 sys 0m0.310s 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:35.308 ************************************ 00:11:35.308 END TEST dd_flag_directory_forced_aio 00:11:35.308 ************************************ 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:35.308 ************************************ 00:11:35.308 START TEST dd_flag_nofollow_forced_aio 00:11:35.308 ************************************ 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:35.308 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:35.308 [2024-07-12 12:23:04.298763] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:35.308 [2024-07-12 12:23:04.298896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75484 ] 00:11:35.567 [2024-07-12 12:23:04.437211] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.567 [2024-07-12 12:23:04.534997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.567 [2024-07-12 12:23:04.589627] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:35.567 [2024-07-12 12:23:04.624169] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:11:35.567 [2024-07-12 12:23:04.624239] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:11:35.567 [2024-07-12 12:23:04.624271] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:35.826 [2024-07-12 12:23:04.743718] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:35.826 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:11:35.826 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:35.826 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:11:35.826 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:11:35.826 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:11:35.826 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:35.826 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:35.826 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:11:35.826 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
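The nofollow sequence above first symlinks the two scratch files and then retries the copy with --iflag=nofollow (and, in the run that follows, --oflag=nofollow); both are expected to fail with "Too many levels of symbolic links" (ELOOP), while the plain copy through the link later in this section succeeds. A small sketch of the same idea, assuming the same repo layout and existing dump files:

  SPDK=/home/vagrant/spdk_repo/spdk
  ln -fs "$SPDK/test/dd/dd.dump0" "$SPDK/test/dd/dd.dump0.link"
  # reading through the link with nofollow is expected to fail (ELOOP)
  "$SPDK/build/bin/spdk_dd" --aio --if="$SPDK/test/dd/dd.dump0.link" --iflag=nofollow \
      --of="$SPDK/test/dd/dd.dump1" || echo "nofollow rejected the symlink, as expected"
  # without nofollow the same copy through the link goes through
  "$SPDK/build/bin/spdk_dd" --aio --if="$SPDK/test/dd/dd.dump0.link" --of="$SPDK/test/dd/dd.dump1"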
00:11:35.826 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:35.826 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:35.826 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:35.826 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:35.826 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:35.826 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:35.826 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:35.826 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:35.826 12:23:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:35.826 [2024-07-12 12:23:04.885602] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:35.826 [2024-07-12 12:23:04.885700] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75493 ] 00:11:36.085 [2024-07-12 12:23:05.014942] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.085 [2024-07-12 12:23:05.110567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.085 [2024-07-12 12:23:05.164849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:36.342 [2024-07-12 12:23:05.199866] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:11:36.342 [2024-07-12 12:23:05.199910] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:11:36.342 [2024-07-12 12:23:05.199927] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:36.342 [2024-07-12 12:23:05.326071] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:11:36.342 12:23:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:11:36.342 12:23:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:36.343 12:23:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:11:36.343 12:23:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:11:36.343 12:23:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:11:36.343 12:23:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:36.343 12:23:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:11:36.343 12:23:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:36.343 12:23:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:36.600 12:23:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:36.600 [2024-07-12 12:23:05.487929] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:36.600 [2024-07-12 12:23:05.488091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75501 ] 00:11:36.600 [2024-07-12 12:23:05.631240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.859 [2024-07-12 12:23:05.731316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.859 [2024-07-12 12:23:05.789877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:37.117  Copying: 512/512 [B] (average 500 kBps) 00:11:37.117 00:11:37.117 12:23:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ pm9qqdxt0f0zvcfe5etjyyzqheeoys7266h9gqpmtn8os095t7z4kse4fnv3nsdvo33zarqkdv5wzspesr4ty4cax5runuvparuzyx6pjx1pljbacoizfgdxww6cm0x9op8udf4cpuzltep9e5olt5ylm6rpj6hotwux5jftheggsxdzs5b8pzh5qqhwdzgm7lo8zgeox6dwz810m4s4b6bj9n1dx2tbhe2hhbze775r6p2jgno63o3phbcs6y4b8y9js4gy7yjhv4g7rbwbtux10aa5iun174h6b7g4luu3yi3z2j2b892gwll93i13hgq3trxqnc90a9uwluusn3obsotwnzobpsas4cr8opfb0dk91kluvui8d15s25x321oxr8kti45ohu6zjsor7evgb5ne01wc8r0cttkiipf67u4u1paz1i6f4cul43ayeqdjd5naog9qz9mkl9izrpvpqnim7b16qsmkaogwdi1vwi45pqpbc0cnyp6ndk1n == \p\m\9\q\q\d\x\t\0\f\0\z\v\c\f\e\5\e\t\j\y\y\z\q\h\e\e\o\y\s\7\2\6\6\h\9\g\q\p\m\t\n\8\o\s\0\9\5\t\7\z\4\k\s\e\4\f\n\v\3\n\s\d\v\o\3\3\z\a\r\q\k\d\v\5\w\z\s\p\e\s\r\4\t\y\4\c\a\x\5\r\u\n\u\v\p\a\r\u\z\y\x\6\p\j\x\1\p\l\j\b\a\c\o\i\z\f\g\d\x\w\w\6\c\m\0\x\9\o\p\8\u\d\f\4\c\p\u\z\l\t\e\p\9\e\5\o\l\t\5\y\l\m\6\r\p\j\6\h\o\t\w\u\x\5\j\f\t\h\e\g\g\s\x\d\z\s\5\b\8\p\z\h\5\q\q\h\w\d\z\g\m\7\l\o\8\z\g\e\o\x\6\d\w\z\8\1\0\m\4\s\4\b\6\b\j\9\n\1\d\x\2\t\b\h\e\2\h\h\b\z\e\7\7\5\r\6\p\2\j\g\n\o\6\3\o\3\p\h\b\c\s\6\y\4\b\8\y\9\j\s\4\g\y\7\y\j\h\v\4\g\7\r\b\w\b\t\u\x\1\0\a\a\5\i\u\n\1\7\4\h\6\b\7\g\4\l\u\u\3\y\i\3\z\2\j\2\b\8\9\2\g\w\l\l\9\3\i\1\3\h\g\q\3\t\r\x\q\n\c\9\0\a\9\u\w\l\u\u\s\n\3\o\b\s\o\t\w\n\z\o\b\p\s\a\s\4\c\r\8\o\p\f\b\0\d\k\9\1\k\l\u\v\u\i\8\d\1\5\s\2\5\x\3\2\1\o\x\r\8\k\t\i\4\5\o\h\u\6\z\j\s\o\r\7\e\v\g\b\5\n\e\0\1\w\c\8\r\0\c\t\t\k\i\i\p\f\6\7\u\4\u\1\p\a\z\1\i\6\f\4\c\u\l\4\3\a\y\e\q\d\j\d\5\n\a\o\g\9\q\z\9\m\k\l\9\i\z\r\p\v\p\q\n\i\m\7\b\1\6\q\s\m\k\a\o\g\w\d\i\1\v\w\i\4\5\p\q\p\b\c\0\c\n\y\p\6\n\d\k\1\n ]] 00:11:37.117 00:11:37.117 real 0m1.804s 00:11:37.117 user 0m1.011s 00:11:37.117 sys 0m0.455s 00:11:37.117 12:23:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:37.117 ************************************ 00:11:37.117 END TEST dd_flag_nofollow_forced_aio 00:11:37.117 12:23:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 
00:11:37.117 ************************************ 00:11:37.117 12:23:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:11:37.117 12:23:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:11:37.117 12:23:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:37.117 12:23:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.117 12:23:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:37.117 ************************************ 00:11:37.117 START TEST dd_flag_noatime_forced_aio 00:11:37.117 ************************************ 00:11:37.117 12:23:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:11:37.117 12:23:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:11:37.117 12:23:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:11:37.117 12:23:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:11:37.117 12:23:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:37.117 12:23:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:37.117 12:23:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:37.117 12:23:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1720786985 00:11:37.117 12:23:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:37.117 12:23:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1720786986 00:11:37.117 12:23:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:11:38.051 12:23:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:38.309 [2024-07-12 12:23:07.174078] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
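The noatime check above records dd.dump0's access time with stat --printf=%X, sleeps for a second, and then copies the file with --iflag=noatime; the comparisons that follow assert the atime did not move, whereas a later copy without the flag is allowed to bump it. A compact sketch of the same check, assuming the repo layout from this log:

  SPDK=/home/vagrant/spdk_repo/spdk
  atime_before=$(stat --printf=%X "$SPDK/test/dd/dd.dump0")
  sleep 1   # give a changed atime room to be observable
  "$SPDK/build/bin/spdk_dd" --aio --if="$SPDK/test/dd/dd.dump0" --iflag=noatime \
      --of="$SPDK/test/dd/dd.dump1"
  atime_after=$(stat --printf=%X "$SPDK/test/dd/dd.dump0")
  (( atime_before == atime_after )) && echo "atime unchanged, as expected"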
00:11:38.309 [2024-07-12 12:23:07.174187] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75541 ] 00:11:38.309 [2024-07-12 12:23:07.311447] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.568 [2024-07-12 12:23:07.404546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.568 [2024-07-12 12:23:07.458159] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:38.827  Copying: 512/512 [B] (average 500 kBps) 00:11:38.827 00:11:38.827 12:23:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:38.827 12:23:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1720786985 )) 00:11:38.827 12:23:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:38.827 12:23:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1720786986 )) 00:11:38.827 12:23:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:38.827 [2024-07-12 12:23:07.756348] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:38.827 [2024-07-12 12:23:07.756444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75553 ] 00:11:38.827 [2024-07-12 12:23:07.891081] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.085 [2024-07-12 12:23:07.960437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.085 [2024-07-12 12:23:08.013419] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:39.344  Copying: 512/512 [B] (average 500 kBps) 00:11:39.344 00:11:39.344 12:23:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:39.344 ************************************ 00:11:39.344 END TEST dd_flag_noatime_forced_aio 00:11:39.344 ************************************ 00:11:39.344 12:23:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1720786988 )) 00:11:39.344 00:11:39.344 real 0m2.166s 00:11:39.344 user 0m0.616s 00:11:39.344 sys 0m0.306s 00:11:39.344 12:23:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:39.344 12:23:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:39.344 12:23:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:11:39.344 12:23:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:11:39.344 12:23:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:39.344 12:23:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.344 12:23:08 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:39.344 ************************************ 00:11:39.344 START TEST dd_flags_misc_forced_aio 00:11:39.344 ************************************ 00:11:39.345 12:23:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:11:39.345 12:23:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:11:39.345 12:23:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:11:39.345 12:23:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:11:39.345 12:23:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:11:39.345 12:23:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:11:39.345 12:23:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:39.345 12:23:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:39.345 12:23:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:39.345 12:23:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:11:39.345 [2024-07-12 12:23:08.364122] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:39.345 [2024-07-12 12:23:08.364223] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75585 ] 00:11:39.603 [2024-07-12 12:23:08.493855] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.603 [2024-07-12 12:23:08.571877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.603 [2024-07-12 12:23:08.625714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:39.860  Copying: 512/512 [B] (average 500 kBps) 00:11:39.860 00:11:39.861 12:23:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ embivzk3cyfocv7lunxvuf7aa9lk30mp4rv47jojjdxnnfni377v5ekhe7cihlwdkyyrrf2on5tsjreqrbutbzh9i24tmzlemcqvog51zxatax4oqxisrw62k0hklsyxllddras7q191ylxqjmbsef3hibsnzqhuuje4po8ty4oe749893plo3pn77t8eli88cg6g31mbhltf31sxwp873lori0vkkaweblgihzg0a00y2qss3ztd9woo2lh9eqh18o3suz58i7ihjt1ziq3asya14fc7gx5ieb3dyvxxzw9eoiiu7uyn0ydhgq1ww7hrp8eyqsb15ghs93704mm6vggdizv28588qzllmeei13xr6395y37opb0ey13upsb4jrj2btpvvkpb8sfa70qrbjsbi09i4kr55fm0ztgo4oob4pszea76vp4yoozk19vdj7ybnu24hh0gav2d2ma6zowyakmu1g8z0b63ztbuthwtjrhj8p8ddldklnis0ln == 
\e\m\b\i\v\z\k\3\c\y\f\o\c\v\7\l\u\n\x\v\u\f\7\a\a\9\l\k\3\0\m\p\4\r\v\4\7\j\o\j\j\d\x\n\n\f\n\i\3\7\7\v\5\e\k\h\e\7\c\i\h\l\w\d\k\y\y\r\r\f\2\o\n\5\t\s\j\r\e\q\r\b\u\t\b\z\h\9\i\2\4\t\m\z\l\e\m\c\q\v\o\g\5\1\z\x\a\t\a\x\4\o\q\x\i\s\r\w\6\2\k\0\h\k\l\s\y\x\l\l\d\d\r\a\s\7\q\1\9\1\y\l\x\q\j\m\b\s\e\f\3\h\i\b\s\n\z\q\h\u\u\j\e\4\p\o\8\t\y\4\o\e\7\4\9\8\9\3\p\l\o\3\p\n\7\7\t\8\e\l\i\8\8\c\g\6\g\3\1\m\b\h\l\t\f\3\1\s\x\w\p\8\7\3\l\o\r\i\0\v\k\k\a\w\e\b\l\g\i\h\z\g\0\a\0\0\y\2\q\s\s\3\z\t\d\9\w\o\o\2\l\h\9\e\q\h\1\8\o\3\s\u\z\5\8\i\7\i\h\j\t\1\z\i\q\3\a\s\y\a\1\4\f\c\7\g\x\5\i\e\b\3\d\y\v\x\x\z\w\9\e\o\i\i\u\7\u\y\n\0\y\d\h\g\q\1\w\w\7\h\r\p\8\e\y\q\s\b\1\5\g\h\s\9\3\7\0\4\m\m\6\v\g\g\d\i\z\v\2\8\5\8\8\q\z\l\l\m\e\e\i\1\3\x\r\6\3\9\5\y\3\7\o\p\b\0\e\y\1\3\u\p\s\b\4\j\r\j\2\b\t\p\v\v\k\p\b\8\s\f\a\7\0\q\r\b\j\s\b\i\0\9\i\4\k\r\5\5\f\m\0\z\t\g\o\4\o\o\b\4\p\s\z\e\a\7\6\v\p\4\y\o\o\z\k\1\9\v\d\j\7\y\b\n\u\2\4\h\h\0\g\a\v\2\d\2\m\a\6\z\o\w\y\a\k\m\u\1\g\8\z\0\b\6\3\z\t\b\u\t\h\w\t\j\r\h\j\8\p\8\d\d\l\d\k\l\n\i\s\0\l\n ]] 00:11:39.861 12:23:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:39.861 12:23:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:39.861 [2024-07-12 12:23:08.934845] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:39.861 [2024-07-12 12:23:08.934946] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75591 ] 00:11:40.118 [2024-07-12 12:23:09.070659] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.118 [2024-07-12 12:23:09.148031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.377 [2024-07-12 12:23:09.202808] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:40.377  Copying: 512/512 [B] (average 500 kBps) 00:11:40.377 00:11:40.377 12:23:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ embivzk3cyfocv7lunxvuf7aa9lk30mp4rv47jojjdxnnfni377v5ekhe7cihlwdkyyrrf2on5tsjreqrbutbzh9i24tmzlemcqvog51zxatax4oqxisrw62k0hklsyxllddras7q191ylxqjmbsef3hibsnzqhuuje4po8ty4oe749893plo3pn77t8eli88cg6g31mbhltf31sxwp873lori0vkkaweblgihzg0a00y2qss3ztd9woo2lh9eqh18o3suz58i7ihjt1ziq3asya14fc7gx5ieb3dyvxxzw9eoiiu7uyn0ydhgq1ww7hrp8eyqsb15ghs93704mm6vggdizv28588qzllmeei13xr6395y37opb0ey13upsb4jrj2btpvvkpb8sfa70qrbjsbi09i4kr55fm0ztgo4oob4pszea76vp4yoozk19vdj7ybnu24hh0gav2d2ma6zowyakmu1g8z0b63ztbuthwtjrhj8p8ddldklnis0ln == 
\e\m\b\i\v\z\k\3\c\y\f\o\c\v\7\l\u\n\x\v\u\f\7\a\a\9\l\k\3\0\m\p\4\r\v\4\7\j\o\j\j\d\x\n\n\f\n\i\3\7\7\v\5\e\k\h\e\7\c\i\h\l\w\d\k\y\y\r\r\f\2\o\n\5\t\s\j\r\e\q\r\b\u\t\b\z\h\9\i\2\4\t\m\z\l\e\m\c\q\v\o\g\5\1\z\x\a\t\a\x\4\o\q\x\i\s\r\w\6\2\k\0\h\k\l\s\y\x\l\l\d\d\r\a\s\7\q\1\9\1\y\l\x\q\j\m\b\s\e\f\3\h\i\b\s\n\z\q\h\u\u\j\e\4\p\o\8\t\y\4\o\e\7\4\9\8\9\3\p\l\o\3\p\n\7\7\t\8\e\l\i\8\8\c\g\6\g\3\1\m\b\h\l\t\f\3\1\s\x\w\p\8\7\3\l\o\r\i\0\v\k\k\a\w\e\b\l\g\i\h\z\g\0\a\0\0\y\2\q\s\s\3\z\t\d\9\w\o\o\2\l\h\9\e\q\h\1\8\o\3\s\u\z\5\8\i\7\i\h\j\t\1\z\i\q\3\a\s\y\a\1\4\f\c\7\g\x\5\i\e\b\3\d\y\v\x\x\z\w\9\e\o\i\i\u\7\u\y\n\0\y\d\h\g\q\1\w\w\7\h\r\p\8\e\y\q\s\b\1\5\g\h\s\9\3\7\0\4\m\m\6\v\g\g\d\i\z\v\2\8\5\8\8\q\z\l\l\m\e\e\i\1\3\x\r\6\3\9\5\y\3\7\o\p\b\0\e\y\1\3\u\p\s\b\4\j\r\j\2\b\t\p\v\v\k\p\b\8\s\f\a\7\0\q\r\b\j\s\b\i\0\9\i\4\k\r\5\5\f\m\0\z\t\g\o\4\o\o\b\4\p\s\z\e\a\7\6\v\p\4\y\o\o\z\k\1\9\v\d\j\7\y\b\n\u\2\4\h\h\0\g\a\v\2\d\2\m\a\6\z\o\w\y\a\k\m\u\1\g\8\z\0\b\6\3\z\t\b\u\t\h\w\t\j\r\h\j\8\p\8\d\d\l\d\k\l\n\i\s\0\l\n ]] 00:11:40.377 12:23:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:40.377 12:23:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:40.636 [2024-07-12 12:23:09.505584] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:40.636 [2024-07-12 12:23:09.505707] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75600 ] 00:11:40.636 [2024-07-12 12:23:09.644559] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.912 [2024-07-12 12:23:09.723801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.912 [2024-07-12 12:23:09.779212] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:41.173  Copying: 512/512 [B] (average 250 kBps) 00:11:41.173 00:11:41.173 12:23:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ embivzk3cyfocv7lunxvuf7aa9lk30mp4rv47jojjdxnnfni377v5ekhe7cihlwdkyyrrf2on5tsjreqrbutbzh9i24tmzlemcqvog51zxatax4oqxisrw62k0hklsyxllddras7q191ylxqjmbsef3hibsnzqhuuje4po8ty4oe749893plo3pn77t8eli88cg6g31mbhltf31sxwp873lori0vkkaweblgihzg0a00y2qss3ztd9woo2lh9eqh18o3suz58i7ihjt1ziq3asya14fc7gx5ieb3dyvxxzw9eoiiu7uyn0ydhgq1ww7hrp8eyqsb15ghs93704mm6vggdizv28588qzllmeei13xr6395y37opb0ey13upsb4jrj2btpvvkpb8sfa70qrbjsbi09i4kr55fm0ztgo4oob4pszea76vp4yoozk19vdj7ybnu24hh0gav2d2ma6zowyakmu1g8z0b63ztbuthwtjrhj8p8ddldklnis0ln == 
\e\m\b\i\v\z\k\3\c\y\f\o\c\v\7\l\u\n\x\v\u\f\7\a\a\9\l\k\3\0\m\p\4\r\v\4\7\j\o\j\j\d\x\n\n\f\n\i\3\7\7\v\5\e\k\h\e\7\c\i\h\l\w\d\k\y\y\r\r\f\2\o\n\5\t\s\j\r\e\q\r\b\u\t\b\z\h\9\i\2\4\t\m\z\l\e\m\c\q\v\o\g\5\1\z\x\a\t\a\x\4\o\q\x\i\s\r\w\6\2\k\0\h\k\l\s\y\x\l\l\d\d\r\a\s\7\q\1\9\1\y\l\x\q\j\m\b\s\e\f\3\h\i\b\s\n\z\q\h\u\u\j\e\4\p\o\8\t\y\4\o\e\7\4\9\8\9\3\p\l\o\3\p\n\7\7\t\8\e\l\i\8\8\c\g\6\g\3\1\m\b\h\l\t\f\3\1\s\x\w\p\8\7\3\l\o\r\i\0\v\k\k\a\w\e\b\l\g\i\h\z\g\0\a\0\0\y\2\q\s\s\3\z\t\d\9\w\o\o\2\l\h\9\e\q\h\1\8\o\3\s\u\z\5\8\i\7\i\h\j\t\1\z\i\q\3\a\s\y\a\1\4\f\c\7\g\x\5\i\e\b\3\d\y\v\x\x\z\w\9\e\o\i\i\u\7\u\y\n\0\y\d\h\g\q\1\w\w\7\h\r\p\8\e\y\q\s\b\1\5\g\h\s\9\3\7\0\4\m\m\6\v\g\g\d\i\z\v\2\8\5\8\8\q\z\l\l\m\e\e\i\1\3\x\r\6\3\9\5\y\3\7\o\p\b\0\e\y\1\3\u\p\s\b\4\j\r\j\2\b\t\p\v\v\k\p\b\8\s\f\a\7\0\q\r\b\j\s\b\i\0\9\i\4\k\r\5\5\f\m\0\z\t\g\o\4\o\o\b\4\p\s\z\e\a\7\6\v\p\4\y\o\o\z\k\1\9\v\d\j\7\y\b\n\u\2\4\h\h\0\g\a\v\2\d\2\m\a\6\z\o\w\y\a\k\m\u\1\g\8\z\0\b\6\3\z\t\b\u\t\h\w\t\j\r\h\j\8\p\8\d\d\l\d\k\l\n\i\s\0\l\n ]] 00:11:41.173 12:23:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:41.173 12:23:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:41.173 [2024-07-12 12:23:10.092235] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:11:41.173 [2024-07-12 12:23:10.092367] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75607 ] 00:11:41.173 [2024-07-12 12:23:10.229442] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.431 [2024-07-12 12:23:10.314046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.431 [2024-07-12 12:23:10.370092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:41.690  Copying: 512/512 [B] (average 500 kBps) 00:11:41.690 00:11:41.690 12:23:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ embivzk3cyfocv7lunxvuf7aa9lk30mp4rv47jojjdxnnfni377v5ekhe7cihlwdkyyrrf2on5tsjreqrbutbzh9i24tmzlemcqvog51zxatax4oqxisrw62k0hklsyxllddras7q191ylxqjmbsef3hibsnzqhuuje4po8ty4oe749893plo3pn77t8eli88cg6g31mbhltf31sxwp873lori0vkkaweblgihzg0a00y2qss3ztd9woo2lh9eqh18o3suz58i7ihjt1ziq3asya14fc7gx5ieb3dyvxxzw9eoiiu7uyn0ydhgq1ww7hrp8eyqsb15ghs93704mm6vggdizv28588qzllmeei13xr6395y37opb0ey13upsb4jrj2btpvvkpb8sfa70qrbjsbi09i4kr55fm0ztgo4oob4pszea76vp4yoozk19vdj7ybnu24hh0gav2d2ma6zowyakmu1g8z0b63ztbuthwtjrhj8p8ddldklnis0ln == 
\e\m\b\i\v\z\k\3\c\y\f\o\c\v\7\l\u\n\x\v\u\f\7\a\a\9\l\k\3\0\m\p\4\r\v\4\7\j\o\j\j\d\x\n\n\f\n\i\3\7\7\v\5\e\k\h\e\7\c\i\h\l\w\d\k\y\y\r\r\f\2\o\n\5\t\s\j\r\e\q\r\b\u\t\b\z\h\9\i\2\4\t\m\z\l\e\m\c\q\v\o\g\5\1\z\x\a\t\a\x\4\o\q\x\i\s\r\w\6\2\k\0\h\k\l\s\y\x\l\l\d\d\r\a\s\7\q\1\9\1\y\l\x\q\j\m\b\s\e\f\3\h\i\b\s\n\z\q\h\u\u\j\e\4\p\o\8\t\y\4\o\e\7\4\9\8\9\3\p\l\o\3\p\n\7\7\t\8\e\l\i\8\8\c\g\6\g\3\1\m\b\h\l\t\f\3\1\s\x\w\p\8\7\3\l\o\r\i\0\v\k\k\a\w\e\b\l\g\i\h\z\g\0\a\0\0\y\2\q\s\s\3\z\t\d\9\w\o\o\2\l\h\9\e\q\h\1\8\o\3\s\u\z\5\8\i\7\i\h\j\t\1\z\i\q\3\a\s\y\a\1\4\f\c\7\g\x\5\i\e\b\3\d\y\v\x\x\z\w\9\e\o\i\i\u\7\u\y\n\0\y\d\h\g\q\1\w\w\7\h\r\p\8\e\y\q\s\b\1\5\g\h\s\9\3\7\0\4\m\m\6\v\g\g\d\i\z\v\2\8\5\8\8\q\z\l\l\m\e\e\i\1\3\x\r\6\3\9\5\y\3\7\o\p\b\0\e\y\1\3\u\p\s\b\4\j\r\j\2\b\t\p\v\v\k\p\b\8\s\f\a\7\0\q\r\b\j\s\b\i\0\9\i\4\k\r\5\5\f\m\0\z\t\g\o\4\o\o\b\4\p\s\z\e\a\7\6\v\p\4\y\o\o\z\k\1\9\v\d\j\7\y\b\n\u\2\4\h\h\0\g\a\v\2\d\2\m\a\6\z\o\w\y\a\k\m\u\1\g\8\z\0\b\6\3\z\t\b\u\t\h\w\t\j\r\h\j\8\p\8\d\d\l\d\k\l\n\i\s\0\l\n ]] 00:11:41.690 12:23:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:11:41.690 12:23:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:11:41.690 12:23:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:41.690 12:23:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:41.690 12:23:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:41.690 12:23:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:11:41.690 [2024-07-12 12:23:10.690716] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:41.690 [2024-07-12 12:23:10.690844] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75615 ] 00:11:41.948 [2024-07-12 12:23:10.829133] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.948 [2024-07-12 12:23:10.920844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.948 [2024-07-12 12:23:10.973393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:42.205  Copying: 512/512 [B] (average 500 kBps) 00:11:42.205 00:11:42.205 12:23:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 58mfmno8fwuvs1s5kxv6rv6oon6i7c77krx1pn1945qphj3ap86xiliz5u523lngfag7v3tvf92jm5bk4mjw2daltrkes8103oveinr8yya3pyar43j08fqbu88f29ynolsfp23xc4opaopnrszvsnsw8qw2n12vtakp8aodn1rly3gqnb2igc89xohcdgqt6rcnhg2hq2ojkkx31ok0cxt8yiu307x8yiflvtzr11w3y99dle2t62j3ecdhut0ttdi1le5ou8z5giw2qa8tkoli7yk8myb9491wazgrinarckuetfwxboy6ict4702htrohb4s4wj2t1oskqrc3ww64qq9xeyxalnvrhla72e5rha98l6a68g8jckai0f9ic3685xihue2mab0jeudv8p3ar5e1jfsl9xx51pzbd2wt6bsg5c9voan244bhobbij5ecjnelecwgwkur6h3v0f1jvoic7vnuzkdp58srccmvewaz0igqw2tnqyrrrhi9 == \5\8\m\f\m\n\o\8\f\w\u\v\s\1\s\5\k\x\v\6\r\v\6\o\o\n\6\i\7\c\7\7\k\r\x\1\p\n\1\9\4\5\q\p\h\j\3\a\p\8\6\x\i\l\i\z\5\u\5\2\3\l\n\g\f\a\g\7\v\3\t\v\f\9\2\j\m\5\b\k\4\m\j\w\2\d\a\l\t\r\k\e\s\8\1\0\3\o\v\e\i\n\r\8\y\y\a\3\p\y\a\r\4\3\j\0\8\f\q\b\u\8\8\f\2\9\y\n\o\l\s\f\p\2\3\x\c\4\o\p\a\o\p\n\r\s\z\v\s\n\s\w\8\q\w\2\n\1\2\v\t\a\k\p\8\a\o\d\n\1\r\l\y\3\g\q\n\b\2\i\g\c\8\9\x\o\h\c\d\g\q\t\6\r\c\n\h\g\2\h\q\2\o\j\k\k\x\3\1\o\k\0\c\x\t\8\y\i\u\3\0\7\x\8\y\i\f\l\v\t\z\r\1\1\w\3\y\9\9\d\l\e\2\t\6\2\j\3\e\c\d\h\u\t\0\t\t\d\i\1\l\e\5\o\u\8\z\5\g\i\w\2\q\a\8\t\k\o\l\i\7\y\k\8\m\y\b\9\4\9\1\w\a\z\g\r\i\n\a\r\c\k\u\e\t\f\w\x\b\o\y\6\i\c\t\4\7\0\2\h\t\r\o\h\b\4\s\4\w\j\2\t\1\o\s\k\q\r\c\3\w\w\6\4\q\q\9\x\e\y\x\a\l\n\v\r\h\l\a\7\2\e\5\r\h\a\9\8\l\6\a\6\8\g\8\j\c\k\a\i\0\f\9\i\c\3\6\8\5\x\i\h\u\e\2\m\a\b\0\j\e\u\d\v\8\p\3\a\r\5\e\1\j\f\s\l\9\x\x\5\1\p\z\b\d\2\w\t\6\b\s\g\5\c\9\v\o\a\n\2\4\4\b\h\o\b\b\i\j\5\e\c\j\n\e\l\e\c\w\g\w\k\u\r\6\h\3\v\0\f\1\j\v\o\i\c\7\v\n\u\z\k\d\p\5\8\s\r\c\c\m\v\e\w\a\z\0\i\g\q\w\2\t\n\q\y\r\r\r\h\i\9 ]] 00:11:42.205 12:23:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:42.205 12:23:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:42.205 [2024-07-12 12:23:11.274363] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:42.205 [2024-07-12 12:23:11.274494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75622 ] 00:11:42.461 [2024-07-12 12:23:11.412902] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.461 [2024-07-12 12:23:11.487888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.461 [2024-07-12 12:23:11.540523] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:42.718  Copying: 512/512 [B] (average 500 kBps) 00:11:42.718 00:11:42.718 12:23:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 58mfmno8fwuvs1s5kxv6rv6oon6i7c77krx1pn1945qphj3ap86xiliz5u523lngfag7v3tvf92jm5bk4mjw2daltrkes8103oveinr8yya3pyar43j08fqbu88f29ynolsfp23xc4opaopnrszvsnsw8qw2n12vtakp8aodn1rly3gqnb2igc89xohcdgqt6rcnhg2hq2ojkkx31ok0cxt8yiu307x8yiflvtzr11w3y99dle2t62j3ecdhut0ttdi1le5ou8z5giw2qa8tkoli7yk8myb9491wazgrinarckuetfwxboy6ict4702htrohb4s4wj2t1oskqrc3ww64qq9xeyxalnvrhla72e5rha98l6a68g8jckai0f9ic3685xihue2mab0jeudv8p3ar5e1jfsl9xx51pzbd2wt6bsg5c9voan244bhobbij5ecjnelecwgwkur6h3v0f1jvoic7vnuzkdp58srccmvewaz0igqw2tnqyrrrhi9 == \5\8\m\f\m\n\o\8\f\w\u\v\s\1\s\5\k\x\v\6\r\v\6\o\o\n\6\i\7\c\7\7\k\r\x\1\p\n\1\9\4\5\q\p\h\j\3\a\p\8\6\x\i\l\i\z\5\u\5\2\3\l\n\g\f\a\g\7\v\3\t\v\f\9\2\j\m\5\b\k\4\m\j\w\2\d\a\l\t\r\k\e\s\8\1\0\3\o\v\e\i\n\r\8\y\y\a\3\p\y\a\r\4\3\j\0\8\f\q\b\u\8\8\f\2\9\y\n\o\l\s\f\p\2\3\x\c\4\o\p\a\o\p\n\r\s\z\v\s\n\s\w\8\q\w\2\n\1\2\v\t\a\k\p\8\a\o\d\n\1\r\l\y\3\g\q\n\b\2\i\g\c\8\9\x\o\h\c\d\g\q\t\6\r\c\n\h\g\2\h\q\2\o\j\k\k\x\3\1\o\k\0\c\x\t\8\y\i\u\3\0\7\x\8\y\i\f\l\v\t\z\r\1\1\w\3\y\9\9\d\l\e\2\t\6\2\j\3\e\c\d\h\u\t\0\t\t\d\i\1\l\e\5\o\u\8\z\5\g\i\w\2\q\a\8\t\k\o\l\i\7\y\k\8\m\y\b\9\4\9\1\w\a\z\g\r\i\n\a\r\c\k\u\e\t\f\w\x\b\o\y\6\i\c\t\4\7\0\2\h\t\r\o\h\b\4\s\4\w\j\2\t\1\o\s\k\q\r\c\3\w\w\6\4\q\q\9\x\e\y\x\a\l\n\v\r\h\l\a\7\2\e\5\r\h\a\9\8\l\6\a\6\8\g\8\j\c\k\a\i\0\f\9\i\c\3\6\8\5\x\i\h\u\e\2\m\a\b\0\j\e\u\d\v\8\p\3\a\r\5\e\1\j\f\s\l\9\x\x\5\1\p\z\b\d\2\w\t\6\b\s\g\5\c\9\v\o\a\n\2\4\4\b\h\o\b\b\i\j\5\e\c\j\n\e\l\e\c\w\g\w\k\u\r\6\h\3\v\0\f\1\j\v\o\i\c\7\v\n\u\z\k\d\p\5\8\s\r\c\c\m\v\e\w\a\z\0\i\g\q\w\2\t\n\q\y\r\r\r\h\i\9 ]] 00:11:42.718 12:23:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:42.718 12:23:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:42.976 [2024-07-12 12:23:11.853286] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:42.976 [2024-07-12 12:23:11.853400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75630 ] 00:11:42.976 [2024-07-12 12:23:11.987514] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.233 [2024-07-12 12:23:12.069704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.233 [2024-07-12 12:23:12.124951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:43.491  Copying: 512/512 [B] (average 500 kBps) 00:11:43.491 00:11:43.491 12:23:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 58mfmno8fwuvs1s5kxv6rv6oon6i7c77krx1pn1945qphj3ap86xiliz5u523lngfag7v3tvf92jm5bk4mjw2daltrkes8103oveinr8yya3pyar43j08fqbu88f29ynolsfp23xc4opaopnrszvsnsw8qw2n12vtakp8aodn1rly3gqnb2igc89xohcdgqt6rcnhg2hq2ojkkx31ok0cxt8yiu307x8yiflvtzr11w3y99dle2t62j3ecdhut0ttdi1le5ou8z5giw2qa8tkoli7yk8myb9491wazgrinarckuetfwxboy6ict4702htrohb4s4wj2t1oskqrc3ww64qq9xeyxalnvrhla72e5rha98l6a68g8jckai0f9ic3685xihue2mab0jeudv8p3ar5e1jfsl9xx51pzbd2wt6bsg5c9voan244bhobbij5ecjnelecwgwkur6h3v0f1jvoic7vnuzkdp58srccmvewaz0igqw2tnqyrrrhi9 == \5\8\m\f\m\n\o\8\f\w\u\v\s\1\s\5\k\x\v\6\r\v\6\o\o\n\6\i\7\c\7\7\k\r\x\1\p\n\1\9\4\5\q\p\h\j\3\a\p\8\6\x\i\l\i\z\5\u\5\2\3\l\n\g\f\a\g\7\v\3\t\v\f\9\2\j\m\5\b\k\4\m\j\w\2\d\a\l\t\r\k\e\s\8\1\0\3\o\v\e\i\n\r\8\y\y\a\3\p\y\a\r\4\3\j\0\8\f\q\b\u\8\8\f\2\9\y\n\o\l\s\f\p\2\3\x\c\4\o\p\a\o\p\n\r\s\z\v\s\n\s\w\8\q\w\2\n\1\2\v\t\a\k\p\8\a\o\d\n\1\r\l\y\3\g\q\n\b\2\i\g\c\8\9\x\o\h\c\d\g\q\t\6\r\c\n\h\g\2\h\q\2\o\j\k\k\x\3\1\o\k\0\c\x\t\8\y\i\u\3\0\7\x\8\y\i\f\l\v\t\z\r\1\1\w\3\y\9\9\d\l\e\2\t\6\2\j\3\e\c\d\h\u\t\0\t\t\d\i\1\l\e\5\o\u\8\z\5\g\i\w\2\q\a\8\t\k\o\l\i\7\y\k\8\m\y\b\9\4\9\1\w\a\z\g\r\i\n\a\r\c\k\u\e\t\f\w\x\b\o\y\6\i\c\t\4\7\0\2\h\t\r\o\h\b\4\s\4\w\j\2\t\1\o\s\k\q\r\c\3\w\w\6\4\q\q\9\x\e\y\x\a\l\n\v\r\h\l\a\7\2\e\5\r\h\a\9\8\l\6\a\6\8\g\8\j\c\k\a\i\0\f\9\i\c\3\6\8\5\x\i\h\u\e\2\m\a\b\0\j\e\u\d\v\8\p\3\a\r\5\e\1\j\f\s\l\9\x\x\5\1\p\z\b\d\2\w\t\6\b\s\g\5\c\9\v\o\a\n\2\4\4\b\h\o\b\b\i\j\5\e\c\j\n\e\l\e\c\w\g\w\k\u\r\6\h\3\v\0\f\1\j\v\o\i\c\7\v\n\u\z\k\d\p\5\8\s\r\c\c\m\v\e\w\a\z\0\i\g\q\w\2\t\n\q\y\r\r\r\h\i\9 ]] 00:11:43.491 12:23:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:43.491 12:23:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:43.491 [2024-07-12 12:23:12.417450] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:43.491 [2024-07-12 12:23:12.417573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75643 ] 00:11:43.491 [2024-07-12 12:23:12.548808] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.749 [2024-07-12 12:23:12.625399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.749 [2024-07-12 12:23:12.678622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:44.007  Copying: 512/512 [B] (average 250 kBps) 00:11:44.007 00:11:44.007 12:23:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 58mfmno8fwuvs1s5kxv6rv6oon6i7c77krx1pn1945qphj3ap86xiliz5u523lngfag7v3tvf92jm5bk4mjw2daltrkes8103oveinr8yya3pyar43j08fqbu88f29ynolsfp23xc4opaopnrszvsnsw8qw2n12vtakp8aodn1rly3gqnb2igc89xohcdgqt6rcnhg2hq2ojkkx31ok0cxt8yiu307x8yiflvtzr11w3y99dle2t62j3ecdhut0ttdi1le5ou8z5giw2qa8tkoli7yk8myb9491wazgrinarckuetfwxboy6ict4702htrohb4s4wj2t1oskqrc3ww64qq9xeyxalnvrhla72e5rha98l6a68g8jckai0f9ic3685xihue2mab0jeudv8p3ar5e1jfsl9xx51pzbd2wt6bsg5c9voan244bhobbij5ecjnelecwgwkur6h3v0f1jvoic7vnuzkdp58srccmvewaz0igqw2tnqyrrrhi9 == \5\8\m\f\m\n\o\8\f\w\u\v\s\1\s\5\k\x\v\6\r\v\6\o\o\n\6\i\7\c\7\7\k\r\x\1\p\n\1\9\4\5\q\p\h\j\3\a\p\8\6\x\i\l\i\z\5\u\5\2\3\l\n\g\f\a\g\7\v\3\t\v\f\9\2\j\m\5\b\k\4\m\j\w\2\d\a\l\t\r\k\e\s\8\1\0\3\o\v\e\i\n\r\8\y\y\a\3\p\y\a\r\4\3\j\0\8\f\q\b\u\8\8\f\2\9\y\n\o\l\s\f\p\2\3\x\c\4\o\p\a\o\p\n\r\s\z\v\s\n\s\w\8\q\w\2\n\1\2\v\t\a\k\p\8\a\o\d\n\1\r\l\y\3\g\q\n\b\2\i\g\c\8\9\x\o\h\c\d\g\q\t\6\r\c\n\h\g\2\h\q\2\o\j\k\k\x\3\1\o\k\0\c\x\t\8\y\i\u\3\0\7\x\8\y\i\f\l\v\t\z\r\1\1\w\3\y\9\9\d\l\e\2\t\6\2\j\3\e\c\d\h\u\t\0\t\t\d\i\1\l\e\5\o\u\8\z\5\g\i\w\2\q\a\8\t\k\o\l\i\7\y\k\8\m\y\b\9\4\9\1\w\a\z\g\r\i\n\a\r\c\k\u\e\t\f\w\x\b\o\y\6\i\c\t\4\7\0\2\h\t\r\o\h\b\4\s\4\w\j\2\t\1\o\s\k\q\r\c\3\w\w\6\4\q\q\9\x\e\y\x\a\l\n\v\r\h\l\a\7\2\e\5\r\h\a\9\8\l\6\a\6\8\g\8\j\c\k\a\i\0\f\9\i\c\3\6\8\5\x\i\h\u\e\2\m\a\b\0\j\e\u\d\v\8\p\3\a\r\5\e\1\j\f\s\l\9\x\x\5\1\p\z\b\d\2\w\t\6\b\s\g\5\c\9\v\o\a\n\2\4\4\b\h\o\b\b\i\j\5\e\c\j\n\e\l\e\c\w\g\w\k\u\r\6\h\3\v\0\f\1\j\v\o\i\c\7\v\n\u\z\k\d\p\5\8\s\r\c\c\m\v\e\w\a\z\0\i\g\q\w\2\t\n\q\y\r\r\r\h\i\9 ]] 00:11:44.007 00:11:44.007 real 0m4.615s 00:11:44.007 user 0m2.461s 00:11:44.007 sys 0m1.169s 00:11:44.007 12:23:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:44.007 12:23:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:44.007 ************************************ 00:11:44.007 END TEST dd_flags_misc_forced_aio 00:11:44.007 ************************************ 00:11:44.007 12:23:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:11:44.007 12:23:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:11:44.007 12:23:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:44.007 12:23:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:44.007 00:11:44.007 real 0m21.261s 00:11:44.007 user 0m10.347s 00:11:44.007 sys 0m6.795s 00:11:44.007 12:23:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:44.007 
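The dd_flags_misc_forced_aio section that just finished walks a small open-flag matrix taken from dd/posix.sh: input flags direct and nonblock, each paired with output flags direct, nonblock, sync and dsync, copying dd.dump0 to dd.dump1 and comparing the random payload after every combination. The loop shape, reconstructed from the arrays and for-lines visible above, looks roughly like this (a sketch, not the verbatim test script):

  SPDK=/home/vagrant/spdk_repo/spdk
  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
      "$SPDK/build/bin/spdk_dd" --aio --if="$SPDK/test/dd/dd.dump0" --iflag="$flag_ro" \
          --of="$SPDK/test/dd/dd.dump1" --oflag="$flag_rw"
    done
  done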
************************************ 00:11:44.007 END TEST spdk_dd_posix 00:11:44.007 ************************************ 00:11:44.007 12:23:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:44.007 12:23:13 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:11:44.007 12:23:13 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:11:44.007 12:23:13 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:44.007 12:23:13 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.007 12:23:13 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:44.007 ************************************ 00:11:44.007 START TEST spdk_dd_malloc 00:11:44.007 ************************************ 00:11:44.007 12:23:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:11:44.264 * Looking for test storage... 00:11:44.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:44.264 12:23:13 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:44.264 12:23:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.264 12:23:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.264 12:23:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.264 12:23:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.264 12:23:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.264 12:23:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.264 12:23:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:11:44.264 12:23:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.264 12:23:13 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:11:44.264 12:23:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:44.264 12:23:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.265 12:23:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:11:44.265 ************************************ 00:11:44.265 START TEST dd_malloc_copy 00:11:44.265 ************************************ 00:11:44.265 12:23:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:11:44.265 12:23:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:11:44.265 12:23:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:11:44.265 12:23:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:11:44.265 12:23:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:11:44.265 12:23:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:11:44.265 12:23:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:11:44.265 12:23:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:11:44.265 12:23:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:11:44.265 12:23:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:44.265 12:23:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:44.265 [2024-07-12 12:23:13.178830] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:44.265 [2024-07-12 12:23:13.178940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75710 ] 00:11:44.265 { 00:11:44.265 "subsystems": [ 00:11:44.265 { 00:11:44.265 "subsystem": "bdev", 00:11:44.265 "config": [ 00:11:44.265 { 00:11:44.265 "params": { 00:11:44.265 "block_size": 512, 00:11:44.265 "num_blocks": 1048576, 00:11:44.265 "name": "malloc0" 00:11:44.265 }, 00:11:44.265 "method": "bdev_malloc_create" 00:11:44.265 }, 00:11:44.265 { 00:11:44.265 "params": { 00:11:44.265 "block_size": 512, 00:11:44.265 "num_blocks": 1048576, 00:11:44.265 "name": "malloc1" 00:11:44.265 }, 00:11:44.265 "method": "bdev_malloc_create" 00:11:44.265 }, 00:11:44.265 { 00:11:44.265 "method": "bdev_wait_for_examine" 00:11:44.265 } 00:11:44.265 ] 00:11:44.265 } 00:11:44.265 ] 00:11:44.265 } 00:11:44.265 [2024-07-12 12:23:13.321439] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.522 [2024-07-12 12:23:13.409690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.522 [2024-07-12 12:23:13.466155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:47.965  Copying: 210/512 [MB] (210 MBps) Copying: 423/512 [MB] (212 MBps) Copying: 512/512 [MB] (average 211 MBps) 00:11:47.965 00:11:47.965 12:23:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:11:47.965 12:23:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:11:47.965 12:23:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:47.965 12:23:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:47.965 [2024-07-12 12:23:16.863039] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
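The malloc_copy pass above builds two 512 MiB malloc bdevs (1048576 blocks of 512 bytes each) from the JSON config shown, copies malloc0 into malloc1, and then repeats the copy in the opposite direction below. A sketch of driving the same copy by hand, assuming a local spdk_dd build; the harness feeds the JSON on /dev/fd/62, while here it is written to an illustrative temporary file:

  SPDK=/home/vagrant/spdk_repo/spdk
  # same bdev config as in the log: two 512 MiB malloc bdevs with 512-byte blocks
  echo '{"subsystems":[{"subsystem":"bdev","config":[
   {"method":"bdev_malloc_create","params":{"name":"malloc0","block_size":512,"num_blocks":1048576}},
   {"method":"bdev_malloc_create","params":{"name":"malloc1","block_size":512,"num_blocks":1048576}},
   {"method":"bdev_wait_for_examine"}]}]}' > /tmp/malloc_copy.json   # hypothetical path
  "$SPDK/build/bin/spdk_dd" --ib=malloc0 --ob=malloc1 --json /tmp/malloc_copy.json
  "$SPDK/build/bin/spdk_dd" --ib=malloc1 --ob=malloc0 --json /tmp/malloc_copy.json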
00:11:47.965 [2024-07-12 12:23:16.863161] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75759 ] 00:11:47.965 { 00:11:47.965 "subsystems": [ 00:11:47.965 { 00:11:47.965 "subsystem": "bdev", 00:11:47.965 "config": [ 00:11:47.965 { 00:11:47.965 "params": { 00:11:47.965 "block_size": 512, 00:11:47.965 "num_blocks": 1048576, 00:11:47.965 "name": "malloc0" 00:11:47.965 }, 00:11:47.965 "method": "bdev_malloc_create" 00:11:47.965 }, 00:11:47.965 { 00:11:47.965 "params": { 00:11:47.965 "block_size": 512, 00:11:47.965 "num_blocks": 1048576, 00:11:47.965 "name": "malloc1" 00:11:47.965 }, 00:11:47.965 "method": "bdev_malloc_create" 00:11:47.965 }, 00:11:47.965 { 00:11:47.965 "method": "bdev_wait_for_examine" 00:11:47.965 } 00:11:47.965 ] 00:11:47.965 } 00:11:47.965 ] 00:11:47.965 } 00:11:47.965 [2024-07-12 12:23:16.996221] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.223 [2024-07-12 12:23:17.055251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.223 [2024-07-12 12:23:17.109726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:51.561  Copying: 197/512 [MB] (197 MBps) Copying: 403/512 [MB] (205 MBps) Copying: 512/512 [MB] (average 201 MBps) 00:11:51.561 00:11:51.561 00:11:51.561 real 0m7.457s 00:11:51.561 user 0m6.449s 00:11:51.561 sys 0m0.832s 00:11:51.561 12:23:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:51.561 12:23:20 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:51.561 ************************************ 00:11:51.561 END TEST dd_malloc_copy 00:11:51.561 ************************************ 00:11:51.561 12:23:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:11:51.561 00:11:51.561 real 0m7.587s 00:11:51.561 user 0m6.497s 00:11:51.561 sys 0m0.915s 00:11:51.561 12:23:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:51.561 ************************************ 00:11:51.561 END TEST spdk_dd_malloc 00:11:51.561 ************************************ 00:11:51.561 12:23:20 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:11:51.824 12:23:20 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:11:51.824 12:23:20 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:11:51.824 12:23:20 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:51.824 12:23:20 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:51.824 12:23:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:51.824 ************************************ 00:11:51.824 START TEST spdk_dd_bdev_to_bdev 00:11:51.824 ************************************ 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:11:51.824 * Looking for test storage... 
00:11:51.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:11:51.824 
12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:51.824 ************************************ 00:11:51.824 START TEST dd_inflate_file 00:11:51.824 ************************************ 00:11:51.824 12:23:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:11:51.824 [2024-07-12 12:23:20.799969] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:51.824 [2024-07-12 12:23:20.800042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75869 ] 00:11:52.085 [2024-07-12 12:23:20.932740] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.085 [2024-07-12 12:23:20.996558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.085 [2024-07-12 12:23:21.049836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:52.344  Copying: 64/64 [MB] (average 1560 MBps) 00:11:52.344 00:11:52.344 00:11:52.344 real 0m0.582s 00:11:52.344 user 0m0.340s 00:11:52.344 sys 0m0.296s 00:11:52.344 12:23:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:52.344 12:23:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:11:52.344 ************************************ 00:11:52.344 END TEST dd_inflate_file 00:11:52.344 ************************************ 00:11:52.344 12:23:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:11:52.344 12:23:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:11:52.344 12:23:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:11:52.344 12:23:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:11:52.344 12:23:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:11:52.344 12:23:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:11:52.344 12:23:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:52.344 12:23:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.344 12:23:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:52.344 12:23:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:52.344 ************************************ 00:11:52.344 START TEST dd_copy_to_out_bdev 00:11:52.344 ************************************ 00:11:52.344 12:23:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:11:52.602 { 00:11:52.602 "subsystems": [ 00:11:52.602 { 00:11:52.602 "subsystem": "bdev", 00:11:52.602 "config": [ 00:11:52.602 { 00:11:52.602 "params": { 00:11:52.602 "trtype": "pcie", 00:11:52.602 "traddr": "0000:00:10.0", 00:11:52.602 "name": "Nvme0" 00:11:52.602 }, 00:11:52.602 "method": "bdev_nvme_attach_controller" 00:11:52.602 }, 00:11:52.602 { 00:11:52.602 "params": { 00:11:52.602 "trtype": "pcie", 00:11:52.602 "traddr": "0000:00:11.0", 00:11:52.602 "name": "Nvme1" 00:11:52.602 }, 00:11:52.602 "method": "bdev_nvme_attach_controller" 00:11:52.602 }, 00:11:52.602 { 00:11:52.602 "method": "bdev_wait_for_examine" 00:11:52.602 } 00:11:52.602 ] 00:11:52.602 } 00:11:52.602 ] 00:11:52.602 } 00:11:52.602 [2024-07-12 12:23:21.457546] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:52.602 [2024-07-12 12:23:21.457682] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75899 ] 00:11:52.602 [2024-07-12 12:23:21.601204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.860 [2024-07-12 12:23:21.689277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.860 [2024-07-12 12:23:21.744278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:54.233  Copying: 56/64 [MB] (56 MBps) Copying: 64/64 [MB] (average 56 MBps) 00:11:54.233 00:11:54.233 00:11:54.233 real 0m1.909s 00:11:54.233 user 0m1.664s 00:11:54.233 sys 0m1.497s 00:11:54.233 12:23:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:54.233 12:23:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:54.233 ************************************ 00:11:54.233 END TEST dd_copy_to_out_bdev 00:11:54.233 ************************************ 00:11:54.491 12:23:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:11:54.491 12:23:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:11:54.491 12:23:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:11:54.491 12:23:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:54.491 12:23:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:54.491 12:23:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:54.491 ************************************ 00:11:54.491 START TEST dd_offset_magic 00:11:54.491 ************************************ 00:11:54.491 12:23:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:11:54.491 12:23:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:11:54.491 12:23:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:11:54.491 12:23:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:11:54.491 12:23:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:54.491 12:23:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:11:54.491 12:23:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:54.491 12:23:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:54.491 12:23:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:54.491 [2024-07-12 12:23:23.415296] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:54.491 [2024-07-12 12:23:23.415425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75942 ] 00:11:54.491 { 00:11:54.491 "subsystems": [ 00:11:54.491 { 00:11:54.491 "subsystem": "bdev", 00:11:54.491 "config": [ 00:11:54.491 { 00:11:54.491 "params": { 00:11:54.491 "trtype": "pcie", 00:11:54.491 "traddr": "0000:00:10.0", 00:11:54.491 "name": "Nvme0" 00:11:54.491 }, 00:11:54.491 "method": "bdev_nvme_attach_controller" 00:11:54.491 }, 00:11:54.491 { 00:11:54.491 "params": { 00:11:54.491 "trtype": "pcie", 00:11:54.491 "traddr": "0000:00:11.0", 00:11:54.491 "name": "Nvme1" 00:11:54.491 }, 00:11:54.491 "method": "bdev_nvme_attach_controller" 00:11:54.491 }, 00:11:54.491 { 00:11:54.491 "method": "bdev_wait_for_examine" 00:11:54.491 } 00:11:54.491 ] 00:11:54.491 } 00:11:54.491 ] 00:11:54.491 } 00:11:54.491 [2024-07-12 12:23:23.550730] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.749 [2024-07-12 12:23:23.636180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.749 [2024-07-12 12:23:23.691320] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:55.265  Copying: 65/65 [MB] (average 866 MBps) 00:11:55.265 00:11:55.265 12:23:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:11:55.265 12:23:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:55.265 12:23:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:55.265 12:23:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:55.265 [2024-07-12 12:23:24.258706] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:55.265 [2024-07-12 12:23:24.258847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75962 ] 00:11:55.265 { 00:11:55.265 "subsystems": [ 00:11:55.265 { 00:11:55.265 "subsystem": "bdev", 00:11:55.265 "config": [ 00:11:55.265 { 00:11:55.265 "params": { 00:11:55.265 "trtype": "pcie", 00:11:55.265 "traddr": "0000:00:10.0", 00:11:55.265 "name": "Nvme0" 00:11:55.265 }, 00:11:55.265 "method": "bdev_nvme_attach_controller" 00:11:55.265 }, 00:11:55.265 { 00:11:55.265 "params": { 00:11:55.265 "trtype": "pcie", 00:11:55.265 "traddr": "0000:00:11.0", 00:11:55.265 "name": "Nvme1" 00:11:55.265 }, 00:11:55.265 "method": "bdev_nvme_attach_controller" 00:11:55.265 }, 00:11:55.265 { 00:11:55.265 "method": "bdev_wait_for_examine" 00:11:55.265 } 00:11:55.265 ] 00:11:55.265 } 00:11:55.265 ] 00:11:55.265 } 00:11:55.523 [2024-07-12 12:23:24.400137] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.523 [2024-07-12 12:23:24.481422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.523 [2024-07-12 12:23:24.533917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:56.040  Copying: 1024/1024 [kB] (average 500 MBps) 00:11:56.040 00:11:56.040 12:23:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:56.040 12:23:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:56.040 12:23:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:56.040 12:23:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:11:56.040 12:23:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:56.040 12:23:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:56.040 12:23:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:56.040 [2024-07-12 12:23:24.978270] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:56.040 [2024-07-12 12:23:24.978574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75984 ] 00:11:56.041 { 00:11:56.041 "subsystems": [ 00:11:56.041 { 00:11:56.041 "subsystem": "bdev", 00:11:56.041 "config": [ 00:11:56.041 { 00:11:56.041 "params": { 00:11:56.041 "trtype": "pcie", 00:11:56.041 "traddr": "0000:00:10.0", 00:11:56.041 "name": "Nvme0" 00:11:56.041 }, 00:11:56.041 "method": "bdev_nvme_attach_controller" 00:11:56.041 }, 00:11:56.041 { 00:11:56.041 "params": { 00:11:56.041 "trtype": "pcie", 00:11:56.041 "traddr": "0000:00:11.0", 00:11:56.041 "name": "Nvme1" 00:11:56.041 }, 00:11:56.041 "method": "bdev_nvme_attach_controller" 00:11:56.041 }, 00:11:56.041 { 00:11:56.041 "method": "bdev_wait_for_examine" 00:11:56.041 } 00:11:56.041 ] 00:11:56.041 } 00:11:56.041 ] 00:11:56.041 } 00:11:56.041 [2024-07-12 12:23:25.117208] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.299 [2024-07-12 12:23:25.203477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.299 [2024-07-12 12:23:25.260513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:56.816  Copying: 65/65 [MB] (average 1000 MBps) 00:11:56.816 00:11:56.816 12:23:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:56.816 12:23:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:11:56.816 12:23:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:56.816 12:23:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:56.816 [2024-07-12 12:23:25.813712] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:56.816 [2024-07-12 12:23:25.813830] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75999 ] 00:11:56.816 { 00:11:56.816 "subsystems": [ 00:11:56.816 { 00:11:56.816 "subsystem": "bdev", 00:11:56.816 "config": [ 00:11:56.816 { 00:11:56.816 "params": { 00:11:56.816 "trtype": "pcie", 00:11:56.816 "traddr": "0000:00:10.0", 00:11:56.816 "name": "Nvme0" 00:11:56.816 }, 00:11:56.816 "method": "bdev_nvme_attach_controller" 00:11:56.816 }, 00:11:56.816 { 00:11:56.816 "params": { 00:11:56.816 "trtype": "pcie", 00:11:56.816 "traddr": "0000:00:11.0", 00:11:56.816 "name": "Nvme1" 00:11:56.816 }, 00:11:56.816 "method": "bdev_nvme_attach_controller" 00:11:56.816 }, 00:11:56.816 { 00:11:56.816 "method": "bdev_wait_for_examine" 00:11:56.816 } 00:11:56.816 ] 00:11:56.816 } 00:11:56.816 ] 00:11:56.816 } 00:11:57.074 [2024-07-12 12:23:25.946884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.074 [2024-07-12 12:23:26.011052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.074 [2024-07-12 12:23:26.065915] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:57.599  Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:57.599 00:11:57.599 12:23:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:57.599 12:23:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:57.599 00:11:57.599 real 0m3.081s 00:11:57.599 user 0m2.190s 00:11:57.599 sys 0m0.960s 00:11:57.599 12:23:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:57.599 12:23:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:57.599 ************************************ 00:11:57.599 END TEST dd_offset_magic 00:11:57.599 ************************************ 00:11:57.599 12:23:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:11:57.599 12:23:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:11:57.599 12:23:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:11:57.599 12:23:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:57.599 12:23:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:11:57.599 12:23:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:11:57.599 12:23:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:11:57.599 12:23:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:11:57.599 12:23:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:11:57.599 12:23:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:11:57.599 12:23:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:57.599 12:23:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:57.599 [2024-07-12 12:23:26.556920] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:57.599 [2024-07-12 12:23:26.557045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76030 ] 00:11:57.599 { 00:11:57.599 "subsystems": [ 00:11:57.599 { 00:11:57.599 "subsystem": "bdev", 00:11:57.599 "config": [ 00:11:57.599 { 00:11:57.599 "params": { 00:11:57.599 "trtype": "pcie", 00:11:57.599 "traddr": "0000:00:10.0", 00:11:57.599 "name": "Nvme0" 00:11:57.599 }, 00:11:57.599 "method": "bdev_nvme_attach_controller" 00:11:57.599 }, 00:11:57.599 { 00:11:57.599 "params": { 00:11:57.599 "trtype": "pcie", 00:11:57.599 "traddr": "0000:00:11.0", 00:11:57.599 "name": "Nvme1" 00:11:57.599 }, 00:11:57.599 "method": "bdev_nvme_attach_controller" 00:11:57.599 }, 00:11:57.599 { 00:11:57.599 "method": "bdev_wait_for_examine" 00:11:57.599 } 00:11:57.599 ] 00:11:57.599 } 00:11:57.599 ] 00:11:57.599 } 00:11:57.857 [2024-07-12 12:23:26.700185] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.857 [2024-07-12 12:23:26.772929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.857 [2024-07-12 12:23:26.829613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:58.116  Copying: 5120/5120 [kB] (average 1250 MBps) 00:11:58.116 00:11:58.116 12:23:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:11:58.116 12:23:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:11:58.116 12:23:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:11:58.116 12:23:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:11:58.116 12:23:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:11:58.116 12:23:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:11:58.116 12:23:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:11:58.116 12:23:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:11:58.116 12:23:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:58.116 12:23:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:58.374 [2024-07-12 12:23:27.250055] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:58.375 [2024-07-12 12:23:27.250160] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76051 ] 00:11:58.375 { 00:11:58.375 "subsystems": [ 00:11:58.375 { 00:11:58.375 "subsystem": "bdev", 00:11:58.375 "config": [ 00:11:58.375 { 00:11:58.375 "params": { 00:11:58.375 "trtype": "pcie", 00:11:58.375 "traddr": "0000:00:10.0", 00:11:58.375 "name": "Nvme0" 00:11:58.375 }, 00:11:58.375 "method": "bdev_nvme_attach_controller" 00:11:58.375 }, 00:11:58.375 { 00:11:58.375 "params": { 00:11:58.375 "trtype": "pcie", 00:11:58.375 "traddr": "0000:00:11.0", 00:11:58.375 "name": "Nvme1" 00:11:58.375 }, 00:11:58.375 "method": "bdev_nvme_attach_controller" 00:11:58.375 }, 00:11:58.375 { 00:11:58.375 "method": "bdev_wait_for_examine" 00:11:58.375 } 00:11:58.375 ] 00:11:58.375 } 00:11:58.375 ] 00:11:58.375 } 00:11:58.375 [2024-07-12 12:23:27.387650] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.375 [2024-07-12 12:23:27.444960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.633 [2024-07-12 12:23:27.501035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:58.891  Copying: 5120/5120 [kB] (average 833 MBps) 00:11:58.891 00:11:58.891 12:23:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:11:58.891 00:11:58.891 real 0m7.227s 00:11:58.891 user 0m5.270s 00:11:58.891 sys 0m3.444s 00:11:58.891 12:23:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:58.891 12:23:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:58.891 ************************************ 00:11:58.891 END TEST spdk_dd_bdev_to_bdev 00:11:58.891 ************************************ 00:11:58.891 12:23:27 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:11:58.891 12:23:27 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:11:58.891 12:23:27 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:58.891 12:23:27 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:58.891 12:23:27 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:58.891 12:23:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:58.891 ************************************ 00:11:58.891 START TEST spdk_dd_uring 00:11:58.891 ************************************ 00:11:58.891 12:23:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:59.150 * Looking for test storage... 
00:11:59.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:11:59.150 ************************************ 00:11:59.150 START TEST dd_uring_copy 00:11:59.150 ************************************ 00:11:59.150 
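For readers who want to reproduce the flow that the dd_uring_copy trace below exercises, here is a minimal hand-run sketch reconstructed only from the commands visible in the trace; it is not part of the test output. The zram sysfs attribute names (disksize, reset, hot_remove), the random-marker helper, the need for root plus configured hugepages, and the uring_copy.json filename are assumptions; the harness itself passes the config over /dev/fd/62 instead of a file.

# Assumes: zram kernel module loaded, SPDK built with uring bdev support,
# run as root with hugepages configured (as on the CI host).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

# 1. Allocate and size a zram device (the run below got id 1).
id=$(cat /sys/class/zram-control/hot_add)
echo 512M > /sys/block/zram${id}/disksize        # assumed standard zram attribute

# 2. Bdev config matching the JSON printed in the trace: a 512 MiB malloc
#    bdev and a uring bdev backed by the zram device.
cat > uring_copy.json <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
    "method": "bdev_malloc_create" },
  { "params": { "filename": "/dev/zram${id}", "name": "uring0" },
    "method": "bdev_uring_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF

# 3. Build the source file: a 1 KiB marker (like gen_bytes 1024) padded with
#    zeros to roughly the 512M device size in one append, as in the trace.
tr -dc 'a-z0-9' </dev/urandom | head -c 1024 > magic.dump0
$SPDK_DD --if=/dev/zero --of=magic.dump0 --oflag=append --bs=536869887 --count=1

# 4. Copy the file into the uring bdev, read it back, and compare.
$SPDK_DD --if=magic.dump0 --ob=uring0 --json uring_copy.json
$SPDK_DD --ib=uring0 --of=magic.dump1 --json uring_copy.json
diff -q magic.dump0 magic.dump1
[ "$(head -c 1024 magic.dump0)" = "$(head -c 1024 magic.dump1)" ] && echo "magic OK"

# 5. Tear the zram device down (assumed standard zram teardown).
echo 1 > /sys/block/zram${id}/reset
echo ${id} > /sys/class/zram-control/hot_remove

Beyond this basic round trip, the trace below also copies uring0 back into malloc0 and exercises the bdev_uring_delete error path, which is why an expected "No such device" failure appears near the end of this section.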
12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=yexqyssti12xfnccepc2qj9mw85avhw4kubssrl1li9r7m84i0ib95sis1b9k06t2l3l59qszbclvnxzugrsjxhksfhzway0m2ijyj2jlyanj9gki63zzc1fc978riqq54s1aad2g4g3rbh13k2jdtwjkr3b72voyjjhsc66uluvoororwznln8bgmy9jd0i53ff99npn4hwfvab1pi0zwrem7hb9fya4049z2uh0wr8xunczz8hqe24tsfpoz4naf527ctzvbsj73nt1rwzggvamvgwxl7ln535d13dyy5e98ea16dq9g5z4jycadoldypoe5q6rcu5de3hvcp7qkef2k7nqght8q6rzaa5yin1fq1xxf9utu7fqlmk9fab6dpxhi5exbyog6qc18oxkz296vxr5cj8vvam1p88h3md6q52q1awzh54qiwff2139lb6vefy6xhr6j0pzrwugu8vt0ao5tryx4wryhb3bp7f71hu35svabepuacisc99luys8yfhxqhpmzwnf0ive3c9cj2ea2bbw6e7bxxmc5mu3oonhnd77yp73u6kwjy7i4dh4efewbp7oq9cv8090nnd7dn42j5degfh1zzi41axrwz5s099bmaes69zgqy19m420k3rwi6ave47ug930sbp235sa7fj1hu0rw58v9ycvtb8qgbtuqkrk35c71fv2qi9o25qkh34kcnij00l6edyqq63f4dfx30hwg49z80e1vpind50a4zhbjf4ctwblr67b01qz8tocpbbny7abamjgfaa1dskzs8o3hj3iai9e5vn1zidsa4wqk2sp15z0py7c7p90q7ueqeqoym6f8p24nk6buk3mt5fmft5a9q083r3urdtttlyyt5ptwxp8ocwphnc3db14vtmoerm43e9t67rvtrwi41iit2cq7j78wlwbuc28fg0qqkrha8v2oagy0uph7bpdw8zlx16lpqipb07mck416alxil6mf2vsiosxuzjpl28lk3w203l 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo yexqyssti12xfnccepc2qj9mw85avhw4kubssrl1li9r7m84i0ib95sis1b9k06t2l3l59qszbclvnxzugrsjxhksfhzway0m2ijyj2jlyanj9gki63zzc1fc978riqq54s1aad2g4g3rbh13k2jdtwjkr3b72voyjjhsc66uluvoororwznln8bgmy9jd0i53ff99npn4hwfvab1pi0zwrem7hb9fya4049z2uh0wr8xunczz8hqe24tsfpoz4naf527ctzvbsj73nt1rwzggvamvgwxl7ln535d13dyy5e98ea16dq9g5z4jycadoldypoe5q6rcu5de3hvcp7qkef2k7nqght8q6rzaa5yin1fq1xxf9utu7fqlmk9fab6dpxhi5exbyog6qc18oxkz296vxr5cj8vvam1p88h3md6q52q1awzh54qiwff2139lb6vefy6xhr6j0pzrwugu8vt0ao5tryx4wryhb3bp7f71hu35svabepuacisc99luys8yfhxqhpmzwnf0ive3c9cj2ea2bbw6e7bxxmc5mu3oonhnd77yp73u6kwjy7i4dh4efewbp7oq9cv8090nnd7dn42j5degfh1zzi41axrwz5s099bmaes69zgqy19m420k3rwi6ave47ug930sbp235sa7fj1hu0rw58v9ycvtb8qgbtuqkrk35c71fv2qi9o25qkh34kcnij00l6edyqq63f4dfx30hwg49z80e1vpind50a4zhbjf4ctwblr67b01qz8tocpbbny7abamjgfaa1dskzs8o3hj3iai9e5vn1zidsa4wqk2sp15z0py7c7p90q7ueqeqoym6f8p24nk6buk3mt5fmft5a9q083r3urdtttlyyt5ptwxp8ocwphnc3db14vtmoerm43e9t67rvtrwi41iit2cq7j78wlwbuc28fg0qqkrha8v2oagy0uph7bpdw8zlx16lpqipb07mck416alxil6mf2vsiosxuzjpl28lk3w203l 00:11:59.150 12:23:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:11:59.150 [2024-07-12 12:23:28.103288] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:59.150 [2024-07-12 12:23:28.103373] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76121 ] 00:11:59.409 [2024-07-12 12:23:28.237503] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.409 [2024-07-12 12:23:28.307870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.409 [2024-07-12 12:23:28.360834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:00.542  Copying: 511/511 [MB] (average 1044 MBps) 00:12:00.542 00:12:00.542 12:23:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:12:00.542 12:23:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:12:00.542 12:23:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:00.542 12:23:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:00.542 [2024-07-12 12:23:29.490486] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:12:00.542 [2024-07-12 12:23:29.490594] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76137 ] 00:12:00.542 { 00:12:00.542 "subsystems": [ 00:12:00.542 { 00:12:00.542 "subsystem": "bdev", 00:12:00.542 "config": [ 00:12:00.542 { 00:12:00.542 "params": { 00:12:00.542 "block_size": 512, 00:12:00.542 "num_blocks": 1048576, 00:12:00.542 "name": "malloc0" 00:12:00.542 }, 00:12:00.542 "method": "bdev_malloc_create" 00:12:00.542 }, 00:12:00.542 { 00:12:00.542 "params": { 00:12:00.542 "filename": "/dev/zram1", 00:12:00.542 "name": "uring0" 00:12:00.542 }, 00:12:00.542 "method": "bdev_uring_create" 00:12:00.542 }, 00:12:00.542 { 00:12:00.542 "method": "bdev_wait_for_examine" 00:12:00.542 } 00:12:00.542 ] 00:12:00.542 } 00:12:00.542 ] 00:12:00.542 } 00:12:00.542 [2024-07-12 12:23:29.622545] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.799 [2024-07-12 12:23:29.694453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.799 [2024-07-12 12:23:29.750963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:03.681  Copying: 209/512 [MB] (209 MBps) Copying: 424/512 [MB] (214 MBps) Copying: 512/512 [MB] (average 212 MBps) 00:12:03.681 00:12:03.939 12:23:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:12:03.939 12:23:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:12:03.939 12:23:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:03.939 12:23:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:03.939 [2024-07-12 12:23:32.813777] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:12:03.939 [2024-07-12 12:23:32.813902] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76186 ] 00:12:03.939 { 00:12:03.939 "subsystems": [ 00:12:03.939 { 00:12:03.939 "subsystem": "bdev", 00:12:03.939 "config": [ 00:12:03.939 { 00:12:03.939 "params": { 00:12:03.939 "block_size": 512, 00:12:03.939 "num_blocks": 1048576, 00:12:03.939 "name": "malloc0" 00:12:03.939 }, 00:12:03.939 "method": "bdev_malloc_create" 00:12:03.939 }, 00:12:03.939 { 00:12:03.939 "params": { 00:12:03.939 "filename": "/dev/zram1", 00:12:03.939 "name": "uring0" 00:12:03.939 }, 00:12:03.939 "method": "bdev_uring_create" 00:12:03.939 }, 00:12:03.939 { 00:12:03.939 "method": "bdev_wait_for_examine" 00:12:03.939 } 00:12:03.939 ] 00:12:03.939 } 00:12:03.939 ] 00:12:03.939 } 00:12:03.939 [2024-07-12 12:23:32.955064] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.197 [2024-07-12 12:23:33.036488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.197 [2024-07-12 12:23:33.095948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:07.695  Copying: 174/512 [MB] (174 MBps) Copying: 333/512 [MB] (158 MBps) Copying: 506/512 [MB] (173 MBps) Copying: 512/512 [MB] (average 168 MBps) 00:12:07.695 00:12:07.695 12:23:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:12:07.696 12:23:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ yexqyssti12xfnccepc2qj9mw85avhw4kubssrl1li9r7m84i0ib95sis1b9k06t2l3l59qszbclvnxzugrsjxhksfhzway0m2ijyj2jlyanj9gki63zzc1fc978riqq54s1aad2g4g3rbh13k2jdtwjkr3b72voyjjhsc66uluvoororwznln8bgmy9jd0i53ff99npn4hwfvab1pi0zwrem7hb9fya4049z2uh0wr8xunczz8hqe24tsfpoz4naf527ctzvbsj73nt1rwzggvamvgwxl7ln535d13dyy5e98ea16dq9g5z4jycadoldypoe5q6rcu5de3hvcp7qkef2k7nqght8q6rzaa5yin1fq1xxf9utu7fqlmk9fab6dpxhi5exbyog6qc18oxkz296vxr5cj8vvam1p88h3md6q52q1awzh54qiwff2139lb6vefy6xhr6j0pzrwugu8vt0ao5tryx4wryhb3bp7f71hu35svabepuacisc99luys8yfhxqhpmzwnf0ive3c9cj2ea2bbw6e7bxxmc5mu3oonhnd77yp73u6kwjy7i4dh4efewbp7oq9cv8090nnd7dn42j5degfh1zzi41axrwz5s099bmaes69zgqy19m420k3rwi6ave47ug930sbp235sa7fj1hu0rw58v9ycvtb8qgbtuqkrk35c71fv2qi9o25qkh34kcnij00l6edyqq63f4dfx30hwg49z80e1vpind50a4zhbjf4ctwblr67b01qz8tocpbbny7abamjgfaa1dskzs8o3hj3iai9e5vn1zidsa4wqk2sp15z0py7c7p90q7ueqeqoym6f8p24nk6buk3mt5fmft5a9q083r3urdtttlyyt5ptwxp8ocwphnc3db14vtmoerm43e9t67rvtrwi41iit2cq7j78wlwbuc28fg0qqkrha8v2oagy0uph7bpdw8zlx16lpqipb07mck416alxil6mf2vsiosxuzjpl28lk3w203l == 
\y\e\x\q\y\s\s\t\i\1\2\x\f\n\c\c\e\p\c\2\q\j\9\m\w\8\5\a\v\h\w\4\k\u\b\s\s\r\l\1\l\i\9\r\7\m\8\4\i\0\i\b\9\5\s\i\s\1\b\9\k\0\6\t\2\l\3\l\5\9\q\s\z\b\c\l\v\n\x\z\u\g\r\s\j\x\h\k\s\f\h\z\w\a\y\0\m\2\i\j\y\j\2\j\l\y\a\n\j\9\g\k\i\6\3\z\z\c\1\f\c\9\7\8\r\i\q\q\5\4\s\1\a\a\d\2\g\4\g\3\r\b\h\1\3\k\2\j\d\t\w\j\k\r\3\b\7\2\v\o\y\j\j\h\s\c\6\6\u\l\u\v\o\o\r\o\r\w\z\n\l\n\8\b\g\m\y\9\j\d\0\i\5\3\f\f\9\9\n\p\n\4\h\w\f\v\a\b\1\p\i\0\z\w\r\e\m\7\h\b\9\f\y\a\4\0\4\9\z\2\u\h\0\w\r\8\x\u\n\c\z\z\8\h\q\e\2\4\t\s\f\p\o\z\4\n\a\f\5\2\7\c\t\z\v\b\s\j\7\3\n\t\1\r\w\z\g\g\v\a\m\v\g\w\x\l\7\l\n\5\3\5\d\1\3\d\y\y\5\e\9\8\e\a\1\6\d\q\9\g\5\z\4\j\y\c\a\d\o\l\d\y\p\o\e\5\q\6\r\c\u\5\d\e\3\h\v\c\p\7\q\k\e\f\2\k\7\n\q\g\h\t\8\q\6\r\z\a\a\5\y\i\n\1\f\q\1\x\x\f\9\u\t\u\7\f\q\l\m\k\9\f\a\b\6\d\p\x\h\i\5\e\x\b\y\o\g\6\q\c\1\8\o\x\k\z\2\9\6\v\x\r\5\c\j\8\v\v\a\m\1\p\8\8\h\3\m\d\6\q\5\2\q\1\a\w\z\h\5\4\q\i\w\f\f\2\1\3\9\l\b\6\v\e\f\y\6\x\h\r\6\j\0\p\z\r\w\u\g\u\8\v\t\0\a\o\5\t\r\y\x\4\w\r\y\h\b\3\b\p\7\f\7\1\h\u\3\5\s\v\a\b\e\p\u\a\c\i\s\c\9\9\l\u\y\s\8\y\f\h\x\q\h\p\m\z\w\n\f\0\i\v\e\3\c\9\c\j\2\e\a\2\b\b\w\6\e\7\b\x\x\m\c\5\m\u\3\o\o\n\h\n\d\7\7\y\p\7\3\u\6\k\w\j\y\7\i\4\d\h\4\e\f\e\w\b\p\7\o\q\9\c\v\8\0\9\0\n\n\d\7\d\n\4\2\j\5\d\e\g\f\h\1\z\z\i\4\1\a\x\r\w\z\5\s\0\9\9\b\m\a\e\s\6\9\z\g\q\y\1\9\m\4\2\0\k\3\r\w\i\6\a\v\e\4\7\u\g\9\3\0\s\b\p\2\3\5\s\a\7\f\j\1\h\u\0\r\w\5\8\v\9\y\c\v\t\b\8\q\g\b\t\u\q\k\r\k\3\5\c\7\1\f\v\2\q\i\9\o\2\5\q\k\h\3\4\k\c\n\i\j\0\0\l\6\e\d\y\q\q\6\3\f\4\d\f\x\3\0\h\w\g\4\9\z\8\0\e\1\v\p\i\n\d\5\0\a\4\z\h\b\j\f\4\c\t\w\b\l\r\6\7\b\0\1\q\z\8\t\o\c\p\b\b\n\y\7\a\b\a\m\j\g\f\a\a\1\d\s\k\z\s\8\o\3\h\j\3\i\a\i\9\e\5\v\n\1\z\i\d\s\a\4\w\q\k\2\s\p\1\5\z\0\p\y\7\c\7\p\9\0\q\7\u\e\q\e\q\o\y\m\6\f\8\p\2\4\n\k\6\b\u\k\3\m\t\5\f\m\f\t\5\a\9\q\0\8\3\r\3\u\r\d\t\t\t\l\y\y\t\5\p\t\w\x\p\8\o\c\w\p\h\n\c\3\d\b\1\4\v\t\m\o\e\r\m\4\3\e\9\t\6\7\r\v\t\r\w\i\4\1\i\i\t\2\c\q\7\j\7\8\w\l\w\b\u\c\2\8\f\g\0\q\q\k\r\h\a\8\v\2\o\a\g\y\0\u\p\h\7\b\p\d\w\8\z\l\x\1\6\l\p\q\i\p\b\0\7\m\c\k\4\1\6\a\l\x\i\l\6\m\f\2\v\s\i\o\s\x\u\z\j\p\l\2\8\l\k\3\w\2\0\3\l ]] 00:12:07.696 12:23:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:12:07.696 12:23:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ yexqyssti12xfnccepc2qj9mw85avhw4kubssrl1li9r7m84i0ib95sis1b9k06t2l3l59qszbclvnxzugrsjxhksfhzway0m2ijyj2jlyanj9gki63zzc1fc978riqq54s1aad2g4g3rbh13k2jdtwjkr3b72voyjjhsc66uluvoororwznln8bgmy9jd0i53ff99npn4hwfvab1pi0zwrem7hb9fya4049z2uh0wr8xunczz8hqe24tsfpoz4naf527ctzvbsj73nt1rwzggvamvgwxl7ln535d13dyy5e98ea16dq9g5z4jycadoldypoe5q6rcu5de3hvcp7qkef2k7nqght8q6rzaa5yin1fq1xxf9utu7fqlmk9fab6dpxhi5exbyog6qc18oxkz296vxr5cj8vvam1p88h3md6q52q1awzh54qiwff2139lb6vefy6xhr6j0pzrwugu8vt0ao5tryx4wryhb3bp7f71hu35svabepuacisc99luys8yfhxqhpmzwnf0ive3c9cj2ea2bbw6e7bxxmc5mu3oonhnd77yp73u6kwjy7i4dh4efewbp7oq9cv8090nnd7dn42j5degfh1zzi41axrwz5s099bmaes69zgqy19m420k3rwi6ave47ug930sbp235sa7fj1hu0rw58v9ycvtb8qgbtuqkrk35c71fv2qi9o25qkh34kcnij00l6edyqq63f4dfx30hwg49z80e1vpind50a4zhbjf4ctwblr67b01qz8tocpbbny7abamjgfaa1dskzs8o3hj3iai9e5vn1zidsa4wqk2sp15z0py7c7p90q7ueqeqoym6f8p24nk6buk3mt5fmft5a9q083r3urdtttlyyt5ptwxp8ocwphnc3db14vtmoerm43e9t67rvtrwi41iit2cq7j78wlwbuc28fg0qqkrha8v2oagy0uph7bpdw8zlx16lpqipb07mck416alxil6mf2vsiosxuzjpl28lk3w203l == 
\y\e\x\q\y\s\s\t\i\1\2\x\f\n\c\c\e\p\c\2\q\j\9\m\w\8\5\a\v\h\w\4\k\u\b\s\s\r\l\1\l\i\9\r\7\m\8\4\i\0\i\b\9\5\s\i\s\1\b\9\k\0\6\t\2\l\3\l\5\9\q\s\z\b\c\l\v\n\x\z\u\g\r\s\j\x\h\k\s\f\h\z\w\a\y\0\m\2\i\j\y\j\2\j\l\y\a\n\j\9\g\k\i\6\3\z\z\c\1\f\c\9\7\8\r\i\q\q\5\4\s\1\a\a\d\2\g\4\g\3\r\b\h\1\3\k\2\j\d\t\w\j\k\r\3\b\7\2\v\o\y\j\j\h\s\c\6\6\u\l\u\v\o\o\r\o\r\w\z\n\l\n\8\b\g\m\y\9\j\d\0\i\5\3\f\f\9\9\n\p\n\4\h\w\f\v\a\b\1\p\i\0\z\w\r\e\m\7\h\b\9\f\y\a\4\0\4\9\z\2\u\h\0\w\r\8\x\u\n\c\z\z\8\h\q\e\2\4\t\s\f\p\o\z\4\n\a\f\5\2\7\c\t\z\v\b\s\j\7\3\n\t\1\r\w\z\g\g\v\a\m\v\g\w\x\l\7\l\n\5\3\5\d\1\3\d\y\y\5\e\9\8\e\a\1\6\d\q\9\g\5\z\4\j\y\c\a\d\o\l\d\y\p\o\e\5\q\6\r\c\u\5\d\e\3\h\v\c\p\7\q\k\e\f\2\k\7\n\q\g\h\t\8\q\6\r\z\a\a\5\y\i\n\1\f\q\1\x\x\f\9\u\t\u\7\f\q\l\m\k\9\f\a\b\6\d\p\x\h\i\5\e\x\b\y\o\g\6\q\c\1\8\o\x\k\z\2\9\6\v\x\r\5\c\j\8\v\v\a\m\1\p\8\8\h\3\m\d\6\q\5\2\q\1\a\w\z\h\5\4\q\i\w\f\f\2\1\3\9\l\b\6\v\e\f\y\6\x\h\r\6\j\0\p\z\r\w\u\g\u\8\v\t\0\a\o\5\t\r\y\x\4\w\r\y\h\b\3\b\p\7\f\7\1\h\u\3\5\s\v\a\b\e\p\u\a\c\i\s\c\9\9\l\u\y\s\8\y\f\h\x\q\h\p\m\z\w\n\f\0\i\v\e\3\c\9\c\j\2\e\a\2\b\b\w\6\e\7\b\x\x\m\c\5\m\u\3\o\o\n\h\n\d\7\7\y\p\7\3\u\6\k\w\j\y\7\i\4\d\h\4\e\f\e\w\b\p\7\o\q\9\c\v\8\0\9\0\n\n\d\7\d\n\4\2\j\5\d\e\g\f\h\1\z\z\i\4\1\a\x\r\w\z\5\s\0\9\9\b\m\a\e\s\6\9\z\g\q\y\1\9\m\4\2\0\k\3\r\w\i\6\a\v\e\4\7\u\g\9\3\0\s\b\p\2\3\5\s\a\7\f\j\1\h\u\0\r\w\5\8\v\9\y\c\v\t\b\8\q\g\b\t\u\q\k\r\k\3\5\c\7\1\f\v\2\q\i\9\o\2\5\q\k\h\3\4\k\c\n\i\j\0\0\l\6\e\d\y\q\q\6\3\f\4\d\f\x\3\0\h\w\g\4\9\z\8\0\e\1\v\p\i\n\d\5\0\a\4\z\h\b\j\f\4\c\t\w\b\l\r\6\7\b\0\1\q\z\8\t\o\c\p\b\b\n\y\7\a\b\a\m\j\g\f\a\a\1\d\s\k\z\s\8\o\3\h\j\3\i\a\i\9\e\5\v\n\1\z\i\d\s\a\4\w\q\k\2\s\p\1\5\z\0\p\y\7\c\7\p\9\0\q\7\u\e\q\e\q\o\y\m\6\f\8\p\2\4\n\k\6\b\u\k\3\m\t\5\f\m\f\t\5\a\9\q\0\8\3\r\3\u\r\d\t\t\t\l\y\y\t\5\p\t\w\x\p\8\o\c\w\p\h\n\c\3\d\b\1\4\v\t\m\o\e\r\m\4\3\e\9\t\6\7\r\v\t\r\w\i\4\1\i\i\t\2\c\q\7\j\7\8\w\l\w\b\u\c\2\8\f\g\0\q\q\k\r\h\a\8\v\2\o\a\g\y\0\u\p\h\7\b\p\d\w\8\z\l\x\1\6\l\p\q\i\p\b\0\7\m\c\k\4\1\6\a\l\x\i\l\6\m\f\2\v\s\i\o\s\x\u\z\j\p\l\2\8\l\k\3\w\2\0\3\l ]] 00:12:07.696 12:23:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:12:08.261 12:23:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:12:08.261 12:23:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:12:08.261 12:23:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:08.261 12:23:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:08.261 [2024-07-12 12:23:37.161684] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:12:08.261 [2024-07-12 12:23:37.161778] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76251 ] 00:12:08.261 { 00:12:08.261 "subsystems": [ 00:12:08.261 { 00:12:08.261 "subsystem": "bdev", 00:12:08.261 "config": [ 00:12:08.261 { 00:12:08.261 "params": { 00:12:08.261 "block_size": 512, 00:12:08.261 "num_blocks": 1048576, 00:12:08.261 "name": "malloc0" 00:12:08.261 }, 00:12:08.261 "method": "bdev_malloc_create" 00:12:08.261 }, 00:12:08.261 { 00:12:08.261 "params": { 00:12:08.261 "filename": "/dev/zram1", 00:12:08.261 "name": "uring0" 00:12:08.261 }, 00:12:08.261 "method": "bdev_uring_create" 00:12:08.261 }, 00:12:08.261 { 00:12:08.261 "method": "bdev_wait_for_examine" 00:12:08.261 } 00:12:08.261 ] 00:12:08.261 } 00:12:08.261 ] 00:12:08.261 } 00:12:08.261 [2024-07-12 12:23:37.296875] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.519 [2024-07-12 12:23:37.383143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.519 [2024-07-12 12:23:37.436887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:12.324  Copying: 161/512 [MB] (161 MBps) Copying: 320/512 [MB] (159 MBps) Copying: 481/512 [MB] (160 MBps) Copying: 512/512 [MB] (average 160 MBps) 00:12:12.324 00:12:12.324 12:23:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:12:12.324 12:23:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:12:12.324 12:23:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:12:12.324 12:23:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:12:12.324 12:23:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:12:12.324 12:23:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:12:12.324 12:23:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:12.324 12:23:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:12.324 [2024-07-12 12:23:41.268665] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:12:12.324 [2024-07-12 12:23:41.268763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76307 ] 00:12:12.324 { 00:12:12.324 "subsystems": [ 00:12:12.324 { 00:12:12.324 "subsystem": "bdev", 00:12:12.324 "config": [ 00:12:12.324 { 00:12:12.324 "params": { 00:12:12.324 "block_size": 512, 00:12:12.324 "num_blocks": 1048576, 00:12:12.324 "name": "malloc0" 00:12:12.324 }, 00:12:12.324 "method": "bdev_malloc_create" 00:12:12.324 }, 00:12:12.324 { 00:12:12.324 "params": { 00:12:12.324 "filename": "/dev/zram1", 00:12:12.324 "name": "uring0" 00:12:12.324 }, 00:12:12.324 "method": "bdev_uring_create" 00:12:12.324 }, 00:12:12.324 { 00:12:12.324 "params": { 00:12:12.324 "name": "uring0" 00:12:12.324 }, 00:12:12.324 "method": "bdev_uring_delete" 00:12:12.324 }, 00:12:12.324 { 00:12:12.324 "method": "bdev_wait_for_examine" 00:12:12.324 } 00:12:12.324 ] 00:12:12.324 } 00:12:12.324 ] 00:12:12.324 } 00:12:12.324 [2024-07-12 12:23:41.404427] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.582 [2024-07-12 12:23:41.475477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.582 [2024-07-12 12:23:41.529391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:13.098  Copying: 0/0 [B] (average 0 Bps) 00:12:13.099 00:12:13.099 12:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:12:13.099 12:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:12:13.099 12:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:12:13.099 12:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:13.099 12:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:12:13.099 12:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:13.099 12:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:12:13.099 12:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:13.099 12:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:13.099 12:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:13.099 12:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:13.099 12:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:13.099 12:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:13.099 12:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:13.099 12:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:13.099 12:23:42 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:12:13.099 [2024-07-12 12:23:42.171068] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:12:13.099 [2024-07-12 12:23:42.171180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76337 ] 00:12:13.099 { 00:12:13.099 "subsystems": [ 00:12:13.099 { 00:12:13.099 "subsystem": "bdev", 00:12:13.099 "config": [ 00:12:13.099 { 00:12:13.099 "params": { 00:12:13.099 "block_size": 512, 00:12:13.099 "num_blocks": 1048576, 00:12:13.099 "name": "malloc0" 00:12:13.099 }, 00:12:13.099 "method": "bdev_malloc_create" 00:12:13.099 }, 00:12:13.099 { 00:12:13.099 "params": { 00:12:13.099 "filename": "/dev/zram1", 00:12:13.099 "name": "uring0" 00:12:13.099 }, 00:12:13.099 "method": "bdev_uring_create" 00:12:13.099 }, 00:12:13.099 { 00:12:13.099 "params": { 00:12:13.099 "name": "uring0" 00:12:13.099 }, 00:12:13.099 "method": "bdev_uring_delete" 00:12:13.099 }, 00:12:13.099 { 00:12:13.099 "method": "bdev_wait_for_examine" 00:12:13.099 } 00:12:13.099 ] 00:12:13.099 } 00:12:13.099 ] 00:12:13.099 } 00:12:13.356 [2024-07-12 12:23:42.308541] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.356 [2024-07-12 12:23:42.373178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.356 [2024-07-12 12:23:42.427580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:13.627 [2024-07-12 12:23:42.626253] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:12:13.627 [2024-07-12 12:23:42.626322] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:12:13.627 [2024-07-12 12:23:42.626349] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:12:13.627 [2024-07-12 12:23:42.626359] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:13.885 [2024-07-12 12:23:42.937692] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:14.143 12:23:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:12:14.143 12:23:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:14.143 12:23:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:12:14.143 12:23:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:12:14.143 12:23:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:12:14.143 12:23:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:14.143 12:23:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:12:14.143 12:23:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:12:14.143 12:23:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:12:14.143 12:23:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:12:14.143 12:23:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:12:14.143 12:23:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:12:14.143 00:12:14.143 real 0m15.160s 00:12:14.143 user 0m10.296s 00:12:14.143 sys 0m12.685s 00:12:14.143 12:23:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:14.143 12:23:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:14.143 ************************************ 00:12:14.143 END TEST dd_uring_copy 00:12:14.143 ************************************ 00:12:14.400 12:23:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:12:14.400 ************************************ 00:12:14.400 END TEST spdk_dd_uring 00:12:14.400 ************************************ 00:12:14.400 00:12:14.400 real 0m15.305s 00:12:14.400 user 0m10.358s 00:12:14.400 sys 0m12.768s 00:12:14.400 12:23:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:14.400 12:23:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:12:14.400 12:23:43 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:12:14.400 12:23:43 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:12:14.400 12:23:43 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:14.400 12:23:43 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.400 12:23:43 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:12:14.400 ************************************ 00:12:14.400 START TEST spdk_dd_sparse 00:12:14.400 ************************************ 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:12:14.400 * Looking for test storage... 00:12:14.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:12:14.400 1+0 records in 00:12:14.400 1+0 records out 00:12:14.400 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00702308 s, 597 MB/s 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:12:14.400 1+0 records in 00:12:14.400 1+0 records out 00:12:14.400 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00720424 s, 582 MB/s 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:12:14.400 1+0 records in 00:12:14.400 1+0 records out 00:12:14.400 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00387482 s, 1.1 GB/s 00:12:14.400 12:23:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:12:14.401 12:23:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:14.401 12:23:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.401 12:23:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:12:14.401 ************************************ 00:12:14.401 START TEST dd_sparse_file_to_file 00:12:14.401 ************************************ 00:12:14.401 12:23:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # 
file_to_file 00:12:14.401 12:23:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:12:14.401 12:23:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:12:14.401 12:23:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:12:14.401 12:23:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:12:14.401 12:23:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:12:14.401 12:23:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:12:14.401 12:23:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:12:14.401 12:23:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:12:14.401 12:23:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:12:14.401 12:23:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:12:14.658 [2024-07-12 12:23:43.488121] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:12:14.658 [2024-07-12 12:23:43.488222] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76422 ] 00:12:14.658 { 00:12:14.658 "subsystems": [ 00:12:14.658 { 00:12:14.658 "subsystem": "bdev", 00:12:14.658 "config": [ 00:12:14.658 { 00:12:14.658 "params": { 00:12:14.658 "block_size": 4096, 00:12:14.658 "filename": "dd_sparse_aio_disk", 00:12:14.658 "name": "dd_aio" 00:12:14.658 }, 00:12:14.658 "method": "bdev_aio_create" 00:12:14.658 }, 00:12:14.658 { 00:12:14.658 "params": { 00:12:14.658 "lvs_name": "dd_lvstore", 00:12:14.658 "bdev_name": "dd_aio" 00:12:14.658 }, 00:12:14.658 "method": "bdev_lvol_create_lvstore" 00:12:14.658 }, 00:12:14.658 { 00:12:14.658 "method": "bdev_wait_for_examine" 00:12:14.658 } 00:12:14.658 ] 00:12:14.658 } 00:12:14.658 ] 00:12:14.658 } 00:12:14.658 [2024-07-12 12:23:43.625510] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.658 [2024-07-12 12:23:43.696093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.916 [2024-07-12 12:23:43.751084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:15.175  Copying: 12/36 [MB] (average 1000 MBps) 00:12:15.175 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:12:15.175 12:23:44 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:12:15.175 00:12:15.175 real 0m0.660s 00:12:15.175 user 0m0.382s 00:12:15.175 sys 0m0.369s 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:15.175 ************************************ 00:12:15.175 END TEST dd_sparse_file_to_file 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:12:15.175 ************************************ 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:12:15.175 ************************************ 00:12:15.175 START TEST dd_sparse_file_to_bdev 00:12:15.175 ************************************ 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:12:15.175 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:12:15.175 [2024-07-12 12:23:44.196144] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
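Note on the check just completed: dd_sparse_file_to_file compares two stat fields on source and destination, %s (apparent size in bytes) and %b (allocated 512-byte blocks). Matching apparent sizes (37748736, i.e. 36 MiB) together with matching allocated-block counts (24576, far fewer than a fully allocated 36 MiB file would use) indicate that spdk_dd --sparse copied the 12 MiB of real data without filling in the holes. A minimal stand-alone sketch of the same verification, assuming GNU stat and the file_zero1/file_zero2 names used by the harness:

    # Hedged sketch, not part of sparse.sh: confirm a copy kept its holes.
    src=file_zero1; dst=file_zero2
    [ "$(stat --printf=%s "$src")" = "$(stat --printf=%s "$dst")" ] &&
    [ "$(stat --printf=%b "$src")" = "$(stat --printf=%b "$dst")" ] &&
    echo "sparse copy preserved: same apparent size, same allocated blocks"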
00:12:15.175 [2024-07-12 12:23:44.196237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76470 ] 00:12:15.175 { 00:12:15.175 "subsystems": [ 00:12:15.175 { 00:12:15.175 "subsystem": "bdev", 00:12:15.175 "config": [ 00:12:15.175 { 00:12:15.175 "params": { 00:12:15.175 "block_size": 4096, 00:12:15.175 "filename": "dd_sparse_aio_disk", 00:12:15.175 "name": "dd_aio" 00:12:15.175 }, 00:12:15.175 "method": "bdev_aio_create" 00:12:15.175 }, 00:12:15.175 { 00:12:15.175 "params": { 00:12:15.175 "lvs_name": "dd_lvstore", 00:12:15.175 "lvol_name": "dd_lvol", 00:12:15.175 "size_in_mib": 36, 00:12:15.175 "thin_provision": true 00:12:15.175 }, 00:12:15.175 "method": "bdev_lvol_create" 00:12:15.175 }, 00:12:15.175 { 00:12:15.175 "method": "bdev_wait_for_examine" 00:12:15.175 } 00:12:15.175 ] 00:12:15.175 } 00:12:15.175 ] 00:12:15.175 } 00:12:15.434 [2024-07-12 12:23:44.333550] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.434 [2024-07-12 12:23:44.410240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.434 [2024-07-12 12:23:44.464219] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:15.692  Copying: 12/36 [MB] (average 545 MBps) 00:12:15.692 00:12:15.692 00:12:15.692 real 0m0.620s 00:12:15.692 user 0m0.396s 00:12:15.692 sys 0m0.329s 00:12:15.692 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:15.692 ************************************ 00:12:15.692 END TEST dd_sparse_file_to_bdev 00:12:15.692 ************************************ 00:12:15.692 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:12:15.950 12:23:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:12:15.950 12:23:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:12:15.950 12:23:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:15.951 12:23:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.951 12:23:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:12:15.951 ************************************ 00:12:15.951 START TEST dd_sparse_bdev_to_file 00:12:15.951 ************************************ 00:12:15.951 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:12:15.951 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:12:15.951 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:12:15.951 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:12:15.951 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:12:15.951 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:12:15.951 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 
00:12:15.951 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:12:15.951 12:23:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:12:15.951 [2024-07-12 12:23:44.865831] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:12:15.951 [2024-07-12 12:23:44.865928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76497 ] 00:12:15.951 { 00:12:15.951 "subsystems": [ 00:12:15.951 { 00:12:15.951 "subsystem": "bdev", 00:12:15.951 "config": [ 00:12:15.951 { 00:12:15.951 "params": { 00:12:15.951 "block_size": 4096, 00:12:15.951 "filename": "dd_sparse_aio_disk", 00:12:15.951 "name": "dd_aio" 00:12:15.951 }, 00:12:15.951 "method": "bdev_aio_create" 00:12:15.951 }, 00:12:15.951 { 00:12:15.951 "method": "bdev_wait_for_examine" 00:12:15.951 } 00:12:15.951 ] 00:12:15.951 } 00:12:15.951 ] 00:12:15.951 } 00:12:15.951 [2024-07-12 12:23:45.005310] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.209 [2024-07-12 12:23:45.081064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.209 [2024-07-12 12:23:45.135755] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:16.467  Copying: 12/36 [MB] (average 705 MBps) 00:12:16.467 00:12:16.467 12:23:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:12:16.467 12:23:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:12:16.467 12:23:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:12:16.467 12:23:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:12:16.467 12:23:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:12:16.467 12:23:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:12:16.467 12:23:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:12:16.467 12:23:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:12:16.467 12:23:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:12:16.467 12:23:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:12:16.467 00:12:16.467 real 0m0.644s 00:12:16.467 user 0m0.398s 00:12:16.467 sys 0m0.347s 00:12:16.467 12:23:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:16.467 ************************************ 00:12:16.467 12:23:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:12:16.467 END TEST dd_sparse_bdev_to_file 00:12:16.467 ************************************ 00:12:16.467 12:23:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:12:16.467 12:23:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:12:16.467 12:23:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:12:16.467 12:23:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:12:16.467 12:23:45 spdk_dd.spdk_dd_sparse 
-- dd/sparse.sh@13 -- # rm file_zero2 00:12:16.467 12:23:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:12:16.467 00:12:16.467 real 0m2.216s 00:12:16.467 user 0m1.261s 00:12:16.467 sys 0m1.242s 00:12:16.467 12:23:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:16.467 ************************************ 00:12:16.467 END TEST spdk_dd_sparse 00:12:16.467 ************************************ 00:12:16.467 12:23:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:12:16.727 12:23:45 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:12:16.727 12:23:45 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:12:16.727 12:23:45 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:16.727 12:23:45 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:16.727 12:23:45 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:12:16.727 ************************************ 00:12:16.727 START TEST spdk_dd_negative 00:12:16.727 ************************************ 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:12:16.727 * Looking for test storage... 00:12:16.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:16.727 ************************************ 00:12:16.727 START TEST dd_invalid_arguments 00:12:16.727 ************************************ 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.727 12:23:45 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:16.727 12:23:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:12:16.727 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:12:16.727 00:12:16.727 CPU options: 00:12:16.727 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:12:16.727 (like [0,1,10]) 00:12:16.727 --lcores lcore to CPU mapping list. The list is in the format: 00:12:16.727 [<,lcores[@CPUs]>...] 00:12:16.727 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:12:16.727 Within the group, '-' is used for range separator, 00:12:16.727 ',' is used for single number separator. 00:12:16.727 '( )' can be omitted for single element group, 00:12:16.727 '@' can be omitted if cpus and lcores have the same value 00:12:16.727 --disable-cpumask-locks Disable CPU core lock files. 00:12:16.727 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:12:16.727 pollers in the app support interrupt mode) 00:12:16.727 -p, --main-core main (primary) core for DPDK 00:12:16.727 00:12:16.727 Configuration options: 00:12:16.727 -c, --config, --json JSON config file 00:12:16.727 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:12:16.727 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:12:16.727 --wait-for-rpc wait for RPCs to initialize subsystems 00:12:16.727 --rpcs-allowed comma-separated list of permitted RPCS 00:12:16.727 --json-ignore-init-errors don't exit on invalid config entry 00:12:16.727 00:12:16.727 Memory options: 00:12:16.727 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:12:16.727 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:12:16.728 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:12:16.728 -R, --huge-unlink unlink huge files after initialization 00:12:16.728 -n, --mem-channels number of memory channels used for DPDK 00:12:16.728 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:12:16.728 --msg-mempool-size global message memory pool size in count (default: 262143) 00:12:16.728 --no-huge run without using hugepages 00:12:16.728 -i, --shm-id shared memory ID (optional) 00:12:16.728 -g, --single-file-segments force creating just one hugetlbfs file 00:12:16.728 00:12:16.728 PCI options: 00:12:16.728 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:12:16.728 -B, --pci-blocked pci addr to block (can be used more than once) 00:12:16.728 -u, --no-pci disable PCI access 00:12:16.728 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:12:16.728 00:12:16.728 Log options: 00:12:16.728 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:12:16.728 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:12:16.728 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:12:16.728 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:12:16.728 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:12:16.728 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:12:16.728 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:12:16.728 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:12:16.728 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:12:16.728 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:12:16.728 virtio_vfio_user, vmd) 00:12:16.728 --silence-noticelog disable notice level logging to stderr 00:12:16.728 00:12:16.728 Trace options: 00:12:16.728 --num-trace-entries number of trace entries for each core, must be power of 2, 00:12:16.728 setting 0 to disable trace (default 32768) 00:12:16.728 Tracepoints vary in size and can use more than one trace entry. 00:12:16.728 -e, --tpoint-group [:] 00:12:16.728 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:12:16.728 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:12:16.728 [2024-07-12 12:23:45.723434] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:12:16.728 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:12:16.728 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:12:16.728 a tracepoint group. First tpoint inside a group can be enabled by 00:12:16.728 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:12:16.728 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:12:16.728 in /include/spdk_internal/trace_defs.h 00:12:16.728 00:12:16.728 Other options: 00:12:16.728 -h, --help show this usage 00:12:16.728 -v, --version print SPDK version 00:12:16.728 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:12:16.728 --env-context Opaque context for use of the env implementation 00:12:16.728 00:12:16.728 Application specific: 00:12:16.728 [--------- DD Options ---------] 00:12:16.728 --if Input file. Must specify either --if or --ib. 00:12:16.728 --ib Input bdev. Must specifier either --if or --ib 00:12:16.728 --of Output file. Must specify either --of or --ob. 00:12:16.728 --ob Output bdev. Must specify either --of or --ob. 00:12:16.728 --iflag Input file flags. 00:12:16.728 --oflag Output file flags. 00:12:16.728 --bs I/O unit size (default: 4096) 00:12:16.728 --qd Queue depth (default: 2) 00:12:16.728 --count I/O unit count. The number of I/O units to copy. (default: all) 00:12:16.728 --skip Skip this many I/O units at start of input. (default: 0) 00:12:16.728 --seek Skip this many I/O units at start of output. (default: 0) 00:12:16.728 --aio Force usage of AIO. (by default io_uring is used if available) 00:12:16.728 --sparse Enable hole skipping in input target 00:12:16.728 Available iflag and oflag values: 00:12:16.728 append - append mode 00:12:16.728 direct - use direct I/O for data 00:12:16.728 directory - fail unless a directory 00:12:16.728 dsync - use synchronized I/O for data 00:12:16.728 noatime - do not update access time 00:12:16.728 noctty - do not assign controlling terminal from file 00:12:16.728 nofollow - do not follow symlinks 00:12:16.728 nonblock - use non-blocking I/O 00:12:16.728 sync - use synchronized I/O for data and metadata 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:16.728 00:12:16.728 real 0m0.062s 00:12:16.728 user 0m0.034s 00:12:16.728 sys 0m0.028s 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:12:16.728 ************************************ 00:12:16.728 END TEST dd_invalid_arguments 00:12:16.728 ************************************ 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:16.728 ************************************ 00:12:16.728 START TEST dd_double_input 00:12:16.728 ************************************ 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative.dd_double_input -- 
dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:16.728 12:23:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:12:16.986 [2024-07-12 12:23:45.833715] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
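Note on the dd_double_input case: it passes spdk_dd both a file input (--if) and a bdev input (--ib); the NOT wrapper and the es checks that follow only pass if the command prints the "You may specify either --if or --ib, but not both." error above and exits non-zero. A rough way to reproduce that expectation by hand, assuming the same spdk_dd binary and dump-file paths used throughout this log:

    # Hedged sketch: spdk_dd must refuse conflicting inputs and exit non-zero.
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    if "$DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= ; then
        echo "unexpected: spdk_dd accepted both --if and --ib" >&2
        exit 1
    fi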
00:12:16.986 12:23:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:12:16.986 12:23:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:16.986 12:23:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:16.986 12:23:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:16.986 00:12:16.986 real 0m0.059s 00:12:16.986 user 0m0.033s 00:12:16.986 sys 0m0.025s 00:12:16.986 12:23:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:16.986 12:23:45 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:12:16.986 ************************************ 00:12:16.986 END TEST dd_double_input 00:12:16.986 ************************************ 00:12:16.986 12:23:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:16.986 12:23:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:12:16.986 12:23:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:16.986 12:23:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:16.986 12:23:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:16.986 ************************************ 00:12:16.986 START TEST dd_double_output 00:12:16.986 ************************************ 00:12:16.986 12:23:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:12:16.986 12:23:45 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:12:16.986 12:23:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:12:16.986 12:23:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:12:16.986 12:23:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.986 12:23:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.987 12:23:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.987 12:23:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.987 12:23:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.987 12:23:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.987 12:23:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.987 12:23:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:16.987 12:23:45 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:12:16.987 [2024-07-12 12:23:45.954094] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:12:16.987 12:23:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:12:16.987 12:23:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:16.987 12:23:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:16.987 12:23:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:16.987 00:12:16.987 real 0m0.071s 00:12:16.987 user 0m0.047s 00:12:16.987 sys 0m0.023s 00:12:16.987 12:23:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:16.987 12:23:45 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:12:16.987 ************************************ 00:12:16.987 END TEST dd_double_output 00:12:16.987 ************************************ 00:12:16.987 12:23:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:16.987 12:23:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:12:16.987 12:23:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:16.987 12:23:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:16.987 12:23:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:16.987 ************************************ 00:12:16.987 START TEST dd_no_input 00:12:16.987 ************************************ 00:12:16.987 12:23:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:12:16.987 12:23:46 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:12:16.987 12:23:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:12:16.987 12:23:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:12:16.987 12:23:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.987 12:23:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.987 12:23:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.987 12:23:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.987 12:23:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.987 12:23:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.987 12:23:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:16.987 12:23:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:16.987 12:23:46 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:12:17.245 [2024-07-12 12:23:46.069933] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:17.245 00:12:17.245 real 0m0.069s 00:12:17.245 user 0m0.043s 00:12:17.245 sys 0m0.025s 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:12:17.245 ************************************ 00:12:17.245 END TEST dd_no_input 00:12:17.245 ************************************ 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:17.245 ************************************ 00:12:17.245 START TEST dd_no_output 00:12:17.245 ************************************ 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:17.245 12:23:46 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:17.245 [2024-07-12 12:23:46.176707] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:17.245 00:12:17.245 real 0m0.056s 00:12:17.245 user 0m0.036s 00:12:17.245 sys 0m0.020s 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:17.245 ************************************ 00:12:17.245 12:23:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:12:17.245 END TEST dd_no_output 00:12:17.246 ************************************ 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:17.246 ************************************ 00:12:17.246 START TEST dd_wrong_blocksize 00:12:17.246 ************************************ 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:12:17.246 [2024-07-12 12:23:46.287965] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:17.246 00:12:17.246 real 0m0.059s 00:12:17.246 user 0m0.037s 00:12:17.246 sys 0m0.021s 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:17.246 12:23:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:12:17.246 ************************************ 00:12:17.246 END TEST dd_wrong_blocksize 00:12:17.246 ************************************ 00:12:17.504 12:23:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:17.504 12:23:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:12:17.504 12:23:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:17.504 12:23:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:17.504 12:23:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:17.504 ************************************ 00:12:17.504 START TEST dd_smaller_blocksize 00:12:17.504 ************************************ 00:12:17.504 12:23:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:12:17.504 12:23:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:12:17.504 12:23:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:12:17.504 12:23:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:12:17.504 12:23:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.504 12:23:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:17.504 12:23:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.504 12:23:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:12:17.504 12:23:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.504 12:23:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:17.504 12:23:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:17.504 12:23:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:17.504 12:23:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:12:17.504 [2024-07-12 12:23:46.404959] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:12:17.504 [2024-07-12 12:23:46.405058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76721 ] 00:12:17.504 [2024-07-12 12:23:46.545951] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.763 [2024-07-12 12:23:46.638508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.763 [2024-07-12 12:23:46.697964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:17.763 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:12:17.763 [2024-07-12 12:23:46.729411] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:12:17.763 [2024-07-12 12:23:46.729446] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:17.763 [2024-07-12 12:23:46.843247] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:18.021 00:12:18.021 real 0m0.568s 00:12:18.021 user 0m0.303s 00:12:18.021 sys 0m0.159s 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:12:18.021 ************************************ 00:12:18.021 END TEST dd_smaller_blocksize 00:12:18.021 ************************************ 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:18.021 ************************************ 00:12:18.021 START TEST dd_invalid_count 00:12:18.021 ************************************ 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:18.021 12:23:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:18.021 [2024-07-12 12:23:47.036212] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:12:18.021 ************************************ 00:12:18.021 END TEST dd_invalid_count 00:12:18.022 ************************************ 00:12:18.022 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:12:18.022 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:18.022 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:18.022 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:18.022 00:12:18.022 real 0m0.078s 00:12:18.022 user 0m0.048s 00:12:18.022 sys 0m0.029s 00:12:18.022 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.022 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- 
common/autotest_common.sh@10 -- # set +x 00:12:18.022 12:23:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:18.022 12:23:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:12:18.022 12:23:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:18.022 12:23:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.022 12:23:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:18.022 ************************************ 00:12:18.022 START TEST dd_invalid_oflag 00:12:18.022 ************************************ 00:12:18.022 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:12:18.022 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:18.022 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:12:18.022 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:18.022 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.022 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.022 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.300 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.300 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.300 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.300 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.300 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:18.300 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:18.300 [2024-07-12 12:23:47.152723] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:12:18.300 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:18.301 00:12:18.301 real 0m0.070s 00:12:18.301 user 0m0.038s 00:12:18.301 sys 0m0.032s 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:12:18.301 ************************************ 
00:12:18.301 END TEST dd_invalid_oflag 00:12:18.301 ************************************ 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:18.301 ************************************ 00:12:18.301 START TEST dd_invalid_iflag 00:12:18.301 ************************************ 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:18.301 [2024-07-12 12:23:47.273264] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:18.301 00:12:18.301 real 0m0.072s 00:12:18.301 user 0m0.044s 00:12:18.301 sys 0m0.027s 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 
00:12:18.301 ************************************ 00:12:18.301 END TEST dd_invalid_iflag 00:12:18.301 ************************************ 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:18.301 ************************************ 00:12:18.301 START TEST dd_unknown_flag 00:12:18.301 ************************************ 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:18.301 12:23:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:18.565 [2024-07-12 12:23:47.403189] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:12:18.565 [2024-07-12 12:23:47.403313] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76814 ] 00:12:18.565 [2024-07-12 12:23:47.541387] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.565 [2024-07-12 12:23:47.607030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.823 [2024-07-12 12:23:47.663382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:18.823 [2024-07-12 12:23:47.694635] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:12:18.823 [2024-07-12 12:23:47.694693] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:18.823 [2024-07-12 12:23:47.694752] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:12:18.823 [2024-07-12 12:23:47.694766] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:18.823 [2024-07-12 12:23:47.695011] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:12:18.823 [2024-07-12 12:23:47.695029] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:18.823 [2024-07-12 12:23:47.695084] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:12:18.823 [2024-07-12 12:23:47.695095] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:12:18.823 [2024-07-12 12:23:47.806504] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:18.823 12:23:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:12:18.823 12:23:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:18.823 12:23:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:12:18.823 12:23:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:12:18.823 12:23:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:12:18.823 12:23:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:18.823 00:12:18.823 real 0m0.543s 00:12:18.823 user 0m0.283s 00:12:18.823 sys 0m0.165s 00:12:18.823 12:23:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.823 12:23:47 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:12:18.823 ************************************ 00:12:18.823 END TEST dd_unknown_flag 00:12:18.823 ************************************ 00:12:19.080 12:23:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:19.080 12:23:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:12:19.080 12:23:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:19.080 12:23:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:19.080 12:23:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:19.080 ************************************ 00:12:19.080 START TEST dd_invalid_json 00:12:19.080 ************************************ 00:12:19.080 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:12:19.080 12:23:47 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:19.080 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:12:19.080 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:12:19.080 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:19.080 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:19.080 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:19.080 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:19.080 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:19.080 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:19.080 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:19.080 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:19.080 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:19.080 12:23:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:19.081 [2024-07-12 12:23:47.995984] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:12:19.081 [2024-07-12 12:23:47.996080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76837 ] 00:12:19.081 [2024-07-12 12:23:48.133779] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.339 [2024-07-12 12:23:48.223835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.339 [2024-07-12 12:23:48.223947] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:12:19.339 [2024-07-12 12:23:48.223967] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:19.339 [2024-07-12 12:23:48.223976] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:19.339 [2024-07-12 12:23:48.224012] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:19.339 12:23:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:12:19.339 12:23:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:19.339 12:23:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:12:19.339 12:23:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:12:19.339 12:23:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:12:19.339 12:23:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:19.339 00:12:19.339 real 0m0.367s 00:12:19.339 user 0m0.188s 00:12:19.339 sys 0m0.076s 00:12:19.339 12:23:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:19.339 12:23:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:12:19.339 ************************************ 00:12:19.339 END TEST dd_invalid_json 00:12:19.339 ************************************ 00:12:19.339 12:23:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:19.339 00:12:19.339 real 0m2.784s 00:12:19.339 user 0m1.362s 00:12:19.339 sys 0m1.062s 00:12:19.339 12:23:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:19.339 12:23:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:19.339 ************************************ 00:12:19.339 END TEST spdk_dd_negative 00:12:19.339 ************************************ 00:12:19.339 12:23:48 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:12:19.339 00:12:19.339 real 1m14.949s 00:12:19.339 user 0m47.891s 00:12:19.339 sys 0m33.622s 00:12:19.339 12:23:48 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:19.339 12:23:48 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:12:19.339 ************************************ 00:12:19.339 END TEST spdk_dd 00:12:19.339 ************************************ 00:12:19.599 12:23:48 -- common/autotest_common.sh@1142 -- # return 0 00:12:19.599 12:23:48 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:12:19.599 12:23:48 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:12:19.599 12:23:48 -- spdk/autotest.sh@260 -- # timing_exit lib 00:12:19.599 12:23:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:19.599 12:23:48 -- common/autotest_common.sh@10 -- # set +x 00:12:19.599 12:23:48 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 
']' 00:12:19.599 12:23:48 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:12:19.599 12:23:48 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:12:19.599 12:23:48 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:12:19.599 12:23:48 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:12:19.599 12:23:48 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:12:19.599 12:23:48 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:19.599 12:23:48 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:19.599 12:23:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:19.599 12:23:48 -- common/autotest_common.sh@10 -- # set +x 00:12:19.599 ************************************ 00:12:19.599 START TEST nvmf_tcp 00:12:19.599 ************************************ 00:12:19.599 12:23:48 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:19.599 * Looking for test storage... 00:12:19.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:19.599 12:23:48 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.599 12:23:48 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.599 12:23:48 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.599 12:23:48 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.599 12:23:48 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.599 12:23:48 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.599 12:23:48 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:12:19.599 12:23:48 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:12:19.599 12:23:48 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:19.599 12:23:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:12:19.599 12:23:48 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:19.599 12:23:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:19.599 12:23:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:19.599 12:23:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:19.599 ************************************ 00:12:19.599 START TEST nvmf_host_management 00:12:19.599 ************************************ 00:12:19.599 
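Before the host-management run starts, it helps to summarize the pattern every spdk_dd_negative test above follows: feed spdk_dd a deliberately invalid argument combination through the NOT wrapper and pass only if the binary exits non-zero with the expected error. The sketch below is a minimal illustration of that pattern; the not_ok helper is an illustrative stand-in for the suite's NOT/valid_exec_arg machinery, not its actual implementation, and the paths are the ones visible in the trace.

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

    not_ok() {
        # Illustrative stand-in for NOT: succeed only if the wrapped command fails.
        "$@" && return 1
        return 0
    }

    not_ok "$SPDK_DD" --if="$DUMP0" --of="$DUMP1" --bs=0      # rejected: "Invalid --bs value"
    not_ok "$SPDK_DD" --if="$DUMP0" --of="$DUMP1" --count=-9  # rejected: "Invalid --count value"
    not_ok "$SPDK_DD" --ib= --ob= --oflag=0                   # rejected: "--oflags may be used only with --of"
    not_ok "$SPDK_DD" --ib= --ob= --iflag=0                   # rejected: "--iflags may be used only with --if"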
12:23:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:19.858 * Looking for test storage... 00:12:19.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.858 12:23:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:19.859 Cannot find device "nvmf_init_br" 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:19.859 Cannot find device "nvmf_tgt_br" 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:19.859 Cannot find device "nvmf_tgt_br2" 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:19.859 Cannot find device "nvmf_init_br" 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:19.859 Cannot find device "nvmf_tgt_br" 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:12:19.859 12:23:48 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:19.859 Cannot find device "nvmf_tgt_br2" 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:19.859 Cannot find device "nvmf_br" 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:19.859 Cannot find device "nvmf_init_if" 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:19.859 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:19.859 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:19.859 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:20.118 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:20.118 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:20.118 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:20.118 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:20.118 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:20.118 12:23:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:20.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:12:20.118 00:12:20.118 --- 10.0.0.2 ping statistics --- 00:12:20.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.118 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:20.118 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:20.118 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:12:20.118 00:12:20.118 --- 10.0.0.3 ping statistics --- 00:12:20.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.118 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:20.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:20.118 00:12:20.118 --- 10.0.0.1 ping statistics --- 00:12:20.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.118 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=77095 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 77095 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 77095 ']' 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:20.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:20.118 12:23:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:20.377 [2024-07-12 12:23:49.213712] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:12:20.377 [2024-07-12 12:23:49.213859] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.377 [2024-07-12 12:23:49.358498] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.377 [2024-07-12 12:23:49.438923] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.377 [2024-07-12 12:23:49.438985] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.377 [2024-07-12 12:23:49.439000] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.377 [2024-07-12 12:23:49.439011] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.377 [2024-07-12 12:23:49.439021] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
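The nvmftestinit / nvmf_veth_init trace above builds the virtual topology that the target and the initiator-side tools use for the rest of the run. Condensed into plain commands, it amounts to roughly the sketch below; every command is taken from the trace, but the second target interface (nvmf_tgt_if2 / 10.0.0.3), the loopback setup inside the namespace, and the suite's error handling are omitted for brevity.

    ip netns add nvmf_tgt_ns_spdk                              # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up    # bridge joins the two veth peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                         # reachability check before starting the target
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E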
00:12:20.377 [2024-07-12 12:23:49.442822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.377 [2024-07-12 12:23:49.442972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.377 [2024-07-12 12:23:49.443096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:20.377 [2024-07-12 12:23:49.443113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.636 [2024-07-12 12:23:49.499835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:21.202 12:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:21.202 12:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:21.202 12:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:21.202 12:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:21.202 12:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:21.202 12:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.202 12:23:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:21.202 12:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.202 12:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:21.202 [2024-07-12 12:23:50.285177] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:21.461 Malloc0 00:12:21.461 [2024-07-12 12:23:50.359969] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=77156 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 77156 /var/tmp/bdevperf.sock 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 77156 ']' 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:21.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:21.461 { 00:12:21.461 "params": { 00:12:21.461 "name": "Nvme$subsystem", 00:12:21.461 "trtype": "$TEST_TRANSPORT", 00:12:21.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:21.461 "adrfam": "ipv4", 00:12:21.461 "trsvcid": "$NVMF_PORT", 00:12:21.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:21.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:21.461 "hdgst": ${hdgst:-false}, 00:12:21.461 "ddgst": ${ddgst:-false} 00:12:21.461 }, 00:12:21.461 "method": "bdev_nvme_attach_controller" 00:12:21.461 } 00:12:21.461 EOF 00:12:21.461 )") 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:21.461 12:23:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:21.461 "params": { 00:12:21.461 "name": "Nvme0", 00:12:21.461 "trtype": "tcp", 00:12:21.461 "traddr": "10.0.0.2", 00:12:21.461 "adrfam": "ipv4", 00:12:21.461 "trsvcid": "4420", 00:12:21.461 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:21.461 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:21.461 "hdgst": false, 00:12:21.461 "ddgst": false 00:12:21.461 }, 00:12:21.461 "method": "bdev_nvme_attach_controller" 00:12:21.461 }' 00:12:21.461 [2024-07-12 12:23:50.461174] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:12:21.461 [2024-07-12 12:23:50.461277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77156 ] 00:12:21.719 [2024-07-12 12:23:50.602099] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.719 [2024-07-12 12:23:50.691471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.719 [2024-07-12 12:23:50.754500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:21.978 Running I/O for 10 seconds... 00:12:22.546 12:23:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:22.546 12:23:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:22.546 12:23:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:22.546 12:23:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.546 12:23:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:22.546 12:23:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.546 12:23:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:22.546 12:23:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:22.546 12:23:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:22.546 12:23:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:22.547 12:23:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:22.547 12:23:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:22.547 12:23:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:22.547 12:23:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:22.547 12:23:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:22.547 12:23:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.547 12:23:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:22.547 12:23:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:22.547 12:23:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.547 12:23:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:12:22.547 12:23:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:12:22.547 12:23:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:22.547 12:23:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:22.547 12:23:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:22.547 12:23:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:12:22.547 12:23:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.547 12:23:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:22.547 [2024-07-12 12:23:51.589532] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20acfe0 is same with the state(5) to be set 00:12:22.547 [... the same tcp.c:1607 recv-state message for tqpair=0x20acfe0 repeats several dozen more times, 2024-07-12 12:23:51.589630 through 12:23:51.590286 ...] 00:12:22.547 [2024-07-12 12:23:51.590415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.547 [2024-07-12 12:23:51.590459] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-07-12 12:23:51.590973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.548 [2024-07-12 12:23:51.590983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.590994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:12:22.549 [2024-07-12 12:23:51.591140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 
12:23:51.591369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.549 [2024-07-12 12:23:51.591579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.549 [2024-07-12 12:23:51.591588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.550 [2024-07-12 12:23:51.591599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.550 [2024-07-12 12:23:51.591608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.550 [2024-07-12 12:23:51.591619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.550 [2024-07-12 12:23:51.591628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.550 [2024-07-12 12:23:51.591639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.550 [2024-07-12 12:23:51.591648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.550 [2024-07-12 12:23:51.591659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.550 [2024-07-12 12:23:51.591679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.550 [2024-07-12 12:23:51.591690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.550 [2024-07-12 12:23:51.591704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.550 [2024-07-12 12:23:51.591716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.550 [2024-07-12 12:23:51.591725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.550 [2024-07-12 12:23:51.591736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.550 [2024-07-12 12:23:51.591746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.550 [2024-07-12 12:23:51.591757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.550 [2024-07-12 12:23:51.591766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.550 [2024-07-12 12:23:51.591777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.550 [2024-07-12 12:23:51.591810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.550 [2024-07-12 12:23:51.591825] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.550 [2024-07-12 12:23:51.591834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.550 [2024-07-12 12:23:51.591845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.550 [2024-07-12 12:23:51.591855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.550 [2024-07-12 12:23:51.591866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:22.550 [2024-07-12 12:23:51.591875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.550 [2024-07-12 12:23:51.591885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfee6f0 is same with the state(5) to be set 00:12:22.550 [2024-07-12 12:23:51.591951] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfee6f0 was disconnected and freed. reset controller. 00:12:22.550 [2024-07-12 12:23:51.592063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.550 [2024-07-12 12:23:51.592082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.550 [2024-07-12 12:23:51.592093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.550 [2024-07-12 12:23:51.592102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.550 [2024-07-12 12:23:51.592112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.550 [2024-07-12 12:23:51.592121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.550 [2024-07-12 12:23:51.592131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.550 [2024-07-12 12:23:51.592140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.550 [2024-07-12 12:23:51.592149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb8d80 is same with the state(5) to be set 00:12:22.550 [2024-07-12 12:23:51.593295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:22.550 12:23:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.550 12:23:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:22.550 12:23:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.550 12:23:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:22.550 task offset: 0 on job bdev=Nvme0n1 fails 00:12:22.550 00:12:22.550 
Latency(us) 00:12:22.550 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.550 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:22.550 Job: Nvme0n1 ended in about 0.73 seconds with error 00:12:22.550 Verification LBA range: start 0x0 length 0x400 00:12:22.550 Nvme0n1 : 0.73 1406.38 87.90 87.90 0.00 41844.88 3470.43 37415.10 00:12:22.550 =================================================================================================================== 00:12:22.550 Total : 1406.38 87.90 87.90 0.00 41844.88 3470.43 37415.10 00:12:22.550 [2024-07-12 12:23:51.595198] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:22.550 [2024-07-12 12:23:51.595226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb8d80 (9): Bad file descriptor 00:12:22.550 [2024-07-12 12:23:51.600517] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:12:22.550 [2024-07-12 12:23:51.600628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:12:22.550 [2024-07-12 12:23:51.600654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.550 [2024-07-12 12:23:51.600673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:12:22.550 [2024-07-12 12:23:51.600684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:12:22.550 [2024-07-12 12:23:51.600693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:12:22.550 [2024-07-12 12:23:51.600701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfb8d80 00:12:22.550 [2024-07-12 12:23:51.600736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb8d80 (9): Bad file descriptor 00:12:22.550 [2024-07-12 12:23:51.600754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:12:22.550 [2024-07-12 12:23:51.600765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:12:22.550 [2024-07-12 12:23:51.600775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:12:22.550 [2024-07-12 12:23:51.600812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
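The errors above are the point of this part of the test: host_management.sh removes nqn.2016-06.io.spdk:host0 from the allow list of cnode0 while bdevperf has I/O in flight, so the queued READs are aborted (SQ DELETION), the FABRIC CONNECT retry is rejected with "does not allow host", and the controller reset fails; the host is then added back so the follow-up run can attach. A minimal sketch of the same two RPCs against a running target (rpc.py path as used elsewhere in this repo; the default RPC socket /var/tmp/spdk.sock is an assumption):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Drop the host from the subsystem allow list; connected initiators using
  # this host NQN are disconnected and further connects are rejected.
  $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

  # Restore access so the next bdevperf run can attach again.
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0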
00:12:22.550 12:23:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.550 12:23:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:23.926 12:23:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 77156 00:12:23.926 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (77156) - No such process 00:12:23.926 12:23:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:23.926 12:23:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:23.926 12:23:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:23.926 12:23:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:23.926 12:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:23.926 12:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:23.926 12:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:23.926 12:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:23.926 { 00:12:23.926 "params": { 00:12:23.926 "name": "Nvme$subsystem", 00:12:23.926 "trtype": "$TEST_TRANSPORT", 00:12:23.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:23.926 "adrfam": "ipv4", 00:12:23.926 "trsvcid": "$NVMF_PORT", 00:12:23.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:23.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:23.926 "hdgst": ${hdgst:-false}, 00:12:23.926 "ddgst": ${ddgst:-false} 00:12:23.926 }, 00:12:23.926 "method": "bdev_nvme_attach_controller" 00:12:23.926 } 00:12:23.926 EOF 00:12:23.926 )") 00:12:23.926 12:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:23.926 12:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:23.926 12:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:23.926 12:23:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:23.926 "params": { 00:12:23.926 "name": "Nvme0", 00:12:23.926 "trtype": "tcp", 00:12:23.926 "traddr": "10.0.0.2", 00:12:23.926 "adrfam": "ipv4", 00:12:23.926 "trsvcid": "4420", 00:12:23.926 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:23.926 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:23.926 "hdgst": false, 00:12:23.926 "ddgst": false 00:12:23.926 }, 00:12:23.926 "method": "bdev_nvme_attach_controller" 00:12:23.926 }' 00:12:23.926 [2024-07-12 12:23:52.664181] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:12:23.926 [2024-07-12 12:23:52.664264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77194 ] 00:12:23.926 [2024-07-12 12:23:52.796311] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.926 [2024-07-12 12:23:52.880206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.926 [2024-07-12 12:23:52.943727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:24.182 Running I/O for 1 seconds... 00:12:25.114 00:12:25.114 Latency(us) 00:12:25.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:25.114 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:25.114 Verification LBA range: start 0x0 length 0x400 00:12:25.114 Nvme0n1 : 1.03 1432.81 89.55 0.00 0.00 43811.89 4468.36 41704.73 00:12:25.114 =================================================================================================================== 00:12:25.114 Total : 1432.81 89.55 0.00 0.00 43811.89 4468.36 41704.73 00:12:25.371 12:23:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:25.371 12:23:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:25.371 12:23:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:12:25.371 12:23:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:12:25.371 12:23:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:25.371 12:23:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:25.371 12:23:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:12:25.371 12:23:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:25.371 12:23:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:12:25.371 12:23:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:25.371 12:23:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:25.371 rmmod nvme_tcp 00:12:25.371 rmmod nvme_fabrics 00:12:25.371 rmmod nvme_keyring 00:12:25.371 12:23:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:25.630 12:23:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:12:25.630 12:23:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:12:25.630 12:23:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 77095 ']' 00:12:25.630 12:23:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 77095 00:12:25.630 12:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 77095 ']' 00:12:25.630 12:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 77095 00:12:25.630 12:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:12:25.630 12:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:25.630 12:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77095 00:12:25.630 
12:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:25.630 12:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:25.630 killing process with pid 77095 00:12:25.630 12:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77095' 00:12:25.630 12:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 77095 00:12:25.630 12:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 77095 00:12:25.630 [2024-07-12 12:23:54.683816] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:25.630 12:23:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:25.630 12:23:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:25.889 12:23:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:25.889 12:23:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:25.889 12:23:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:25.889 12:23:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.889 12:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.889 12:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.889 12:23:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:25.889 12:23:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:25.889 ************************************ 00:12:25.889 END TEST nvmf_host_management 00:12:25.889 ************************************ 00:12:25.889 00:12:25.889 real 0m6.117s 00:12:25.889 user 0m23.744s 00:12:25.889 sys 0m1.604s 00:12:25.889 12:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:25.889 12:23:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:25.889 12:23:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:25.889 12:23:54 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:25.889 12:23:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:25.889 12:23:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:25.889 12:23:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:25.889 ************************************ 00:12:25.889 START TEST nvmf_lvol 00:12:25.889 ************************************ 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:25.889 * Looking for test storage... 
00:12:25.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:25.889 12:23:54 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:25.889 Cannot find device "nvmf_tgt_br" 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:25.889 Cannot find device "nvmf_tgt_br2" 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:25.889 Cannot find device "nvmf_tgt_br" 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:25.889 Cannot find device "nvmf_tgt_br2" 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:12:25.889 12:23:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:26.149 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:26.149 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:26.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:12:26.149 00:12:26.149 --- 10.0.0.2 ping statistics --- 00:12:26.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.149 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:26.149 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:26.149 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:12:26.149 00:12:26.149 --- 10.0.0.3 ping statistics --- 00:12:26.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.149 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:26.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:26.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:12:26.149 00:12:26.149 --- 10.0.0.1 ping statistics --- 00:12:26.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.149 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:26.149 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:26.407 12:23:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:26.407 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:26.407 12:23:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:26.407 12:23:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:26.407 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=77407 00:12:26.407 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:26.407 12:23:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 77407 00:12:26.407 12:23:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 77407 ']' 00:12:26.408 12:23:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.408 12:23:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:26.408 12:23:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.408 12:23:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:26.408 12:23:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:26.408 [2024-07-12 12:23:55.296758] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:12:26.408 [2024-07-12 12:23:55.296846] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.408 [2024-07-12 12:23:55.432042] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:26.666 [2024-07-12 12:23:55.522106] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.666 [2024-07-12 12:23:55.522158] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:26.666 [2024-07-12 12:23:55.522169] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.666 [2024-07-12 12:23:55.522178] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.666 [2024-07-12 12:23:55.522185] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.666 [2024-07-12 12:23:55.522343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.666 [2024-07-12 12:23:55.522483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.666 [2024-07-12 12:23:55.522486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.667 [2024-07-12 12:23:55.576779] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:27.232 12:23:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:27.232 12:23:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:12:27.232 12:23:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:27.232 12:23:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:27.232 12:23:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:27.491 12:23:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.491 12:23:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:27.748 [2024-07-12 12:23:56.661878] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.748 12:23:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:28.006 12:23:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:28.006 12:23:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:28.264 12:23:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:28.264 12:23:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:28.829 12:23:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:29.086 12:23:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f8d41eb9-4c69-4f0c-a342-79486dea0fc5 00:12:29.086 12:23:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f8d41eb9-4c69-4f0c-a342-79486dea0fc5 lvol 20 00:12:29.345 12:23:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b0fe7ad0-d421-49e0-af5e-f11a5764223e 00:12:29.345 12:23:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:29.603 12:23:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b0fe7ad0-d421-49e0-af5e-f11a5764223e 00:12:29.861 12:23:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:30.127 [2024-07-12 12:23:59.012175] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.127 12:23:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:30.391 12:23:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:30.391 12:23:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=77488 00:12:30.391 12:23:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:31.325 12:24:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot b0fe7ad0-d421-49e0-af5e-f11a5764223e MY_SNAPSHOT 00:12:31.584 12:24:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1902f86d-3d21-4d1b-be5a-a50aecf5a0ab 00:12:31.584 12:24:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize b0fe7ad0-d421-49e0-af5e-f11a5764223e 30 00:12:31.843 12:24:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 1902f86d-3d21-4d1b-be5a-a50aecf5a0ab MY_CLONE 00:12:32.409 12:24:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=60b43fb6-0780-41be-ade1-17e606ec698c 00:12:32.409 12:24:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 60b43fb6-0780-41be-ade1-17e606ec698c 00:12:32.667 12:24:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 77488 00:12:40.806 Initializing NVMe Controllers 00:12:40.806 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:40.806 Controller IO queue size 128, less than required. 00:12:40.806 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:40.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:40.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:40.806 Initialization complete. Launching workers. 
00:12:40.806 ======================================================== 00:12:40.806 Latency(us) 00:12:40.806 Device Information : IOPS MiB/s Average min max 00:12:40.806 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9483.70 37.05 13502.06 2234.83 51851.05 00:12:40.806 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9410.40 36.76 13609.23 1465.31 62412.15 00:12:40.806 ======================================================== 00:12:40.806 Total : 18894.10 73.81 13555.44 1465.31 62412.15 00:12:40.806 00:12:40.806 12:24:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:40.806 12:24:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b0fe7ad0-d421-49e0-af5e-f11a5764223e 00:12:41.063 12:24:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f8d41eb9-4c69-4f0c-a342-79486dea0fc5 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:41.629 rmmod nvme_tcp 00:12:41.629 rmmod nvme_fabrics 00:12:41.629 rmmod nvme_keyring 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 77407 ']' 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 77407 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 77407 ']' 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 77407 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77407 00:12:41.629 killing process with pid 77407 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77407' 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 77407 00:12:41.629 12:24:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 77407 00:12:41.887 12:24:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:41.887 12:24:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:41.887 
12:24:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:41.887 12:24:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:41.888 12:24:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:41.888 12:24:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.888 12:24:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.888 12:24:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.888 12:24:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:41.888 00:12:41.888 real 0m16.051s 00:12:41.888 user 1m6.767s 00:12:41.888 sys 0m4.222s 00:12:41.888 12:24:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:41.888 12:24:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:41.888 ************************************ 00:12:41.888 END TEST nvmf_lvol 00:12:41.888 ************************************ 00:12:41.888 12:24:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:41.888 12:24:10 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:41.888 12:24:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:41.888 12:24:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:41.888 12:24:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:41.888 ************************************ 00:12:41.888 START TEST nvmf_lvs_grow 00:12:41.888 ************************************ 00:12:41.888 12:24:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:42.146 * Looking for test storage... 
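For readers skimming the xtrace, the nvmf_lvol run that just finished boils down to the following RPC sequence, condensed from the trace above; the angle-bracket values are placeholders for the run-specific UUIDs printed in the log (lvstore, lvol, snapshot, clone):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                    # Malloc0
    $rpc bdev_malloc_create 64 512                    # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    $rpc bdev_lvol_create_lvstore raid0 lvs           # prints <lvs-uuid>
    $rpc bdev_lvol_create -u <lvs-uuid> lvol 20       # prints <lvol-uuid>
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf writes to the exported namespace for 10 s:
    $rpc bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT   # prints <snapshot-uuid>
    $rpc bdev_lvol_resize <lvol-uuid> 30
    $rpc bdev_lvol_clone <snapshot-uuid> MY_CLONE     # prints <clone-uuid>
    $rpc bdev_lvol_inflate <clone-uuid>
    # teardown
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc bdev_lvol_delete <lvol-uuid>
    $rpc bdev_lvol_delete_lvstore -u <lvs-uuid>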
00:12:42.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:42.146 12:24:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:42.146 12:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:42.146 12:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.146 12:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.146 12:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.146 12:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.146 12:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.146 12:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.146 12:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.146 12:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.146 12:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.146 12:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.146 12:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:12:42.146 12:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:12:42.146 12:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.146 12:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.146 12:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:42.146 12:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.146 12:24:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:42.146 12:24:10 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.146 12:24:10 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:42.146 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:42.147 Cannot find device "nvmf_tgt_br" 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:42.147 Cannot find device "nvmf_tgt_br2" 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:42.147 Cannot find device "nvmf_tgt_br" 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:42.147 Cannot find device "nvmf_tgt_br2" 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:42.147 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:42.147 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:42.147 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:42.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:42.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:12:42.405 00:12:42.405 --- 10.0.0.2 ping statistics --- 00:12:42.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.405 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:42.405 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:42.405 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:12:42.405 00:12:42.405 --- 10.0.0.3 ping statistics --- 00:12:42.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.405 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:42.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:42.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:12:42.405 00:12:42.405 --- 10.0.0.1 ping statistics --- 00:12:42.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.405 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:42.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=77808 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 77808 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 77808 ']' 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.405 12:24:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:42.406 12:24:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
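The nvmf_veth_init block traced just above (the nvmf_lvol run earlier in this log executed the identical sequence) always builds the same small topology: one initiator-side veth on the host, two target-side veths moved into the nvmf_tgt_ns_spdk namespace, and a bridge joining their peer ends, with the target application then launched inside that namespace. A condensed sketch, with every command taken from the trace (the nvmf_tgt core mask differs per test, 0x7 for nvmf_lvol and 0x1 here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # connectivity check, then start the target inside the namespace
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1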
00:12:42.406 12:24:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:42.406 12:24:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:42.406 [2024-07-12 12:24:11.411914] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:12:42.406 [2024-07-12 12:24:11.412002] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.663 [2024-07-12 12:24:11.553960] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.663 [2024-07-12 12:24:11.641169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.663 [2024-07-12 12:24:11.641230] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.663 [2024-07-12 12:24:11.641244] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.663 [2024-07-12 12:24:11.641255] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.663 [2024-07-12 12:24:11.641264] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:42.663 [2024-07-12 12:24:11.641292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.663 [2024-07-12 12:24:11.699183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:43.595 12:24:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.595 12:24:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:12:43.595 12:24:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:43.595 12:24:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:43.595 12:24:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:43.595 12:24:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.595 12:24:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:43.853 [2024-07-12 12:24:12.795427] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.853 12:24:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:43.853 12:24:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:43.853 12:24:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:43.853 12:24:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:43.853 ************************************ 00:12:43.853 START TEST lvs_grow_clean 00:12:43.853 ************************************ 00:12:43.853 12:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:12:43.853 12:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:43.853 12:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:43.853 12:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:43.853 12:24:12 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:43.853 12:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:43.853 12:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:43.853 12:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:43.853 12:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:43.853 12:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:44.117 12:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:44.117 12:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:44.374 12:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=9c0a24d1-768b-407b-85be-47474029ac93 00:12:44.374 12:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c0a24d1-768b-407b-85be-47474029ac93 00:12:44.374 12:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:44.631 12:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:44.631 12:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:44.631 12:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9c0a24d1-768b-407b-85be-47474029ac93 lvol 150 00:12:44.887 12:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b039c47d-d5d7-45be-aec7-837e7865f524 00:12:44.888 12:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:44.888 12:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:45.145 [2024-07-12 12:24:14.160571] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:45.145 [2024-07-12 12:24:14.160655] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:45.145 true 00:12:45.145 12:24:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:45.145 12:24:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c0a24d1-768b-407b-85be-47474029ac93 00:12:45.406 12:24:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:45.406 12:24:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:45.664 12:24:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b039c47d-d5d7-45be-aec7-837e7865f524 00:12:45.921 12:24:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:46.178 [2024-07-12 12:24:15.121629] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.178 12:24:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:46.435 12:24:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=77896 00:12:46.435 12:24:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:46.435 12:24:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:46.435 12:24:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 77896 /var/tmp/bdevperf.sock 00:12:46.435 12:24:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 77896 ']' 00:12:46.435 12:24:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:46.435 12:24:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:46.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:46.435 12:24:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:46.435 12:24:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:46.435 12:24:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:46.435 [2024-07-12 12:24:15.465695] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:12:46.435 [2024-07-12 12:24:15.465829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77896 ] 00:12:46.692 [2024-07-12 12:24:15.605264] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.692 [2024-07-12 12:24:15.691829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.692 [2024-07-12 12:24:15.749145] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:47.624 12:24:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:47.624 12:24:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:12:47.624 12:24:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:47.881 Nvme0n1 00:12:47.881 12:24:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:48.152 [ 00:12:48.152 { 00:12:48.152 "name": "Nvme0n1", 00:12:48.152 "aliases": [ 00:12:48.152 "b039c47d-d5d7-45be-aec7-837e7865f524" 00:12:48.152 ], 00:12:48.152 "product_name": "NVMe disk", 00:12:48.152 "block_size": 4096, 00:12:48.152 "num_blocks": 38912, 00:12:48.152 "uuid": "b039c47d-d5d7-45be-aec7-837e7865f524", 00:12:48.152 "assigned_rate_limits": { 00:12:48.152 "rw_ios_per_sec": 0, 00:12:48.152 "rw_mbytes_per_sec": 0, 00:12:48.152 "r_mbytes_per_sec": 0, 00:12:48.152 "w_mbytes_per_sec": 0 00:12:48.152 }, 00:12:48.152 "claimed": false, 00:12:48.152 "zoned": false, 00:12:48.152 "supported_io_types": { 00:12:48.152 "read": true, 00:12:48.152 "write": true, 00:12:48.152 "unmap": true, 00:12:48.152 "flush": true, 00:12:48.152 "reset": true, 00:12:48.152 "nvme_admin": true, 00:12:48.152 "nvme_io": true, 00:12:48.152 "nvme_io_md": false, 00:12:48.152 "write_zeroes": true, 00:12:48.152 "zcopy": false, 00:12:48.152 "get_zone_info": false, 00:12:48.152 "zone_management": false, 00:12:48.152 "zone_append": false, 00:12:48.152 "compare": true, 00:12:48.152 "compare_and_write": true, 00:12:48.152 "abort": true, 00:12:48.152 "seek_hole": false, 00:12:48.152 "seek_data": false, 00:12:48.152 "copy": true, 00:12:48.152 "nvme_iov_md": false 00:12:48.152 }, 00:12:48.152 "memory_domains": [ 00:12:48.152 { 00:12:48.152 "dma_device_id": "system", 00:12:48.152 "dma_device_type": 1 00:12:48.152 } 00:12:48.152 ], 00:12:48.152 "driver_specific": { 00:12:48.152 "nvme": [ 00:12:48.152 { 00:12:48.152 "trid": { 00:12:48.152 "trtype": "TCP", 00:12:48.152 "adrfam": "IPv4", 00:12:48.152 "traddr": "10.0.0.2", 00:12:48.152 "trsvcid": "4420", 00:12:48.152 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:48.152 }, 00:12:48.152 "ctrlr_data": { 00:12:48.152 "cntlid": 1, 00:12:48.152 "vendor_id": "0x8086", 00:12:48.152 "model_number": "SPDK bdev Controller", 00:12:48.152 "serial_number": "SPDK0", 00:12:48.152 "firmware_revision": "24.09", 00:12:48.152 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:48.152 "oacs": { 00:12:48.152 "security": 0, 00:12:48.152 "format": 0, 00:12:48.152 "firmware": 0, 00:12:48.152 "ns_manage": 0 00:12:48.153 }, 00:12:48.153 "multi_ctrlr": true, 00:12:48.153 
"ana_reporting": false 00:12:48.153 }, 00:12:48.153 "vs": { 00:12:48.153 "nvme_version": "1.3" 00:12:48.153 }, 00:12:48.153 "ns_data": { 00:12:48.153 "id": 1, 00:12:48.153 "can_share": true 00:12:48.153 } 00:12:48.153 } 00:12:48.153 ], 00:12:48.153 "mp_policy": "active_passive" 00:12:48.153 } 00:12:48.153 } 00:12:48.153 ] 00:12:48.153 12:24:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=77922 00:12:48.153 12:24:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:48.153 12:24:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:48.153 Running I/O for 10 seconds... 00:12:49.082 Latency(us) 00:12:49.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:49.082 Nvme0n1 : 1.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:12:49.082 =================================================================================================================== 00:12:49.082 Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:12:49.082 00:12:50.014 12:24:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9c0a24d1-768b-407b-85be-47474029ac93 00:12:50.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:50.272 Nvme0n1 : 2.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:12:50.272 =================================================================================================================== 00:12:50.272 Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:12:50.272 00:12:50.529 true 00:12:50.529 12:24:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c0a24d1-768b-407b-85be-47474029ac93 00:12:50.529 12:24:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:50.785 12:24:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:50.785 12:24:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:50.785 12:24:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 77922 00:12:51.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:51.350 Nvme0n1 : 3.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:12:51.350 =================================================================================================================== 00:12:51.350 Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:12:51.350 00:12:52.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:52.283 Nvme0n1 : 4.00 7731.75 30.20 0.00 0.00 0.00 0.00 0.00 00:12:52.283 =================================================================================================================== 00:12:52.283 Total : 7731.75 30.20 0.00 0.00 0.00 0.00 0.00 00:12:52.283 00:12:53.216 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:53.216 Nvme0n1 : 5.00 7684.00 30.02 0.00 0.00 0.00 0.00 0.00 00:12:53.217 =================================================================================================================== 00:12:53.217 Total : 7684.00 30.02 0.00 0.00 0.00 
0.00 0.00 00:12:53.217 00:12:54.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:54.147 Nvme0n1 : 6.00 7673.33 29.97 0.00 0.00 0.00 0.00 0.00 00:12:54.147 =================================================================================================================== 00:12:54.147 Total : 7673.33 29.97 0.00 0.00 0.00 0.00 0.00 00:12:54.147 00:12:55.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:55.079 Nvme0n1 : 7.00 7683.86 30.02 0.00 0.00 0.00 0.00 0.00 00:12:55.079 =================================================================================================================== 00:12:55.079 Total : 7683.86 30.02 0.00 0.00 0.00 0.00 0.00 00:12:55.079 00:12:56.452 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:56.452 Nvme0n1 : 8.00 7675.88 29.98 0.00 0.00 0.00 0.00 0.00 00:12:56.452 =================================================================================================================== 00:12:56.452 Total : 7675.88 29.98 0.00 0.00 0.00 0.00 0.00 00:12:56.452 00:12:57.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:57.383 Nvme0n1 : 9.00 7655.56 29.90 0.00 0.00 0.00 0.00 0.00 00:12:57.383 =================================================================================================================== 00:12:57.383 Total : 7655.56 29.90 0.00 0.00 0.00 0.00 0.00 00:12:57.383 00:12:58.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:58.315 Nvme0n1 : 10.00 7639.30 29.84 0.00 0.00 0.00 0.00 0.00 00:12:58.315 =================================================================================================================== 00:12:58.315 Total : 7639.30 29.84 0.00 0.00 0.00 0.00 0.00 00:12:58.315 00:12:58.315 00:12:58.315 Latency(us) 00:12:58.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:58.315 Nvme0n1 : 10.00 7636.20 29.83 0.00 0.00 16753.72 5719.51 35746.91 00:12:58.315 =================================================================================================================== 00:12:58.315 Total : 7636.20 29.83 0.00 0.00 16753.72 5719.51 35746.91 00:12:58.315 0 00:12:58.315 12:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 77896 00:12:58.315 12:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 77896 ']' 00:12:58.315 12:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 77896 00:12:58.315 12:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:12:58.316 12:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:58.316 12:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77896 00:12:58.316 12:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:58.316 12:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:58.316 killing process with pid 77896 00:12:58.316 12:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77896' 00:12:58.316 12:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 77896 
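For reference, the clean-grow path exercised above reduces to a short RPC sequence. The sketch below is a hand-written summary rather than a verbatim excerpt of nvmf_lvs_grow.sh: rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, the lvstore UUID is left as a placeholder, and the 400M final size is assumed from the dirty variant further down; the RPC names, the jq filter and the expected cluster count are the ones visible in the log.
  # grow an lvstore that backs an exported lvol while bdevperf keeps I/O running
  truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev                 # enlarge the backing file (size assumed)
  rpc.py bdev_aio_rescan aio_bdev                                                         # aio bdev picks up the new block count
  rpc.py bdev_lvol_grow_lvstore -u <lvstore-uuid>                                         # lvstore claims the new clusters
  rpc.py bdev_lvol_get_lvstores -u <lvstore-uuid> | jq -r '.[0].total_data_clusters'      # expected to go from 49 to 99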
00:12:58.316 Received shutdown signal, test time was about 10.000000 seconds 00:12:58.316 00:12:58.316 Latency(us) 00:12:58.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.316 =================================================================================================================== 00:12:58.316 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:58.316 12:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 77896 00:12:58.573 12:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:58.831 12:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:59.088 12:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:59.088 12:24:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c0a24d1-768b-407b-85be-47474029ac93 00:12:59.345 12:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:59.345 12:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:59.345 12:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:59.602 [2024-07-12 12:24:28.512370] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:59.602 12:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c0a24d1-768b-407b-85be-47474029ac93 00:12:59.602 12:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:12:59.602 12:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c0a24d1-768b-407b-85be-47474029ac93 00:12:59.602 12:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:59.602 12:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:59.602 12:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:59.602 12:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:59.602 12:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:59.602 12:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:59.602 12:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:59.602 12:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:59.602 12:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 9c0a24d1-768b-407b-85be-47474029ac93 00:12:59.860 request: 00:12:59.860 { 00:12:59.860 "uuid": "9c0a24d1-768b-407b-85be-47474029ac93", 00:12:59.860 "method": "bdev_lvol_get_lvstores", 00:12:59.860 "req_id": 1 00:12:59.860 } 00:12:59.860 Got JSON-RPC error response 00:12:59.860 response: 00:12:59.860 { 00:12:59.860 "code": -19, 00:12:59.860 "message": "No such device" 00:12:59.860 } 00:12:59.860 12:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:12:59.860 12:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:59.860 12:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:59.860 12:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:59.860 12:24:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:00.116 aio_bdev 00:13:00.116 12:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b039c47d-d5d7-45be-aec7-837e7865f524 00:13:00.116 12:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=b039c47d-d5d7-45be-aec7-837e7865f524 00:13:00.116 12:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:00.116 12:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:13:00.116 12:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:00.116 12:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:00.116 12:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:00.373 12:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b039c47d-d5d7-45be-aec7-837e7865f524 -t 2000 00:13:00.630 [ 00:13:00.630 { 00:13:00.630 "name": "b039c47d-d5d7-45be-aec7-837e7865f524", 00:13:00.630 "aliases": [ 00:13:00.630 "lvs/lvol" 00:13:00.630 ], 00:13:00.630 "product_name": "Logical Volume", 00:13:00.630 "block_size": 4096, 00:13:00.630 "num_blocks": 38912, 00:13:00.630 "uuid": "b039c47d-d5d7-45be-aec7-837e7865f524", 00:13:00.630 "assigned_rate_limits": { 00:13:00.630 "rw_ios_per_sec": 0, 00:13:00.630 "rw_mbytes_per_sec": 0, 00:13:00.630 "r_mbytes_per_sec": 0, 00:13:00.630 "w_mbytes_per_sec": 0 00:13:00.630 }, 00:13:00.630 "claimed": false, 00:13:00.630 "zoned": false, 00:13:00.630 "supported_io_types": { 00:13:00.630 "read": true, 00:13:00.630 "write": true, 00:13:00.630 "unmap": true, 00:13:00.630 "flush": false, 00:13:00.630 "reset": true, 00:13:00.630 "nvme_admin": false, 00:13:00.630 "nvme_io": false, 00:13:00.630 "nvme_io_md": false, 00:13:00.630 "write_zeroes": true, 00:13:00.630 "zcopy": false, 00:13:00.630 "get_zone_info": false, 00:13:00.630 "zone_management": false, 00:13:00.630 "zone_append": false, 00:13:00.630 "compare": false, 00:13:00.630 "compare_and_write": false, 00:13:00.630 "abort": false, 00:13:00.630 "seek_hole": true, 00:13:00.631 "seek_data": true, 00:13:00.631 "copy": false, 00:13:00.631 "nvme_iov_md": false 00:13:00.631 }, 00:13:00.631 "driver_specific": { 00:13:00.631 "lvol": { 
00:13:00.631 "lvol_store_uuid": "9c0a24d1-768b-407b-85be-47474029ac93", 00:13:00.631 "base_bdev": "aio_bdev", 00:13:00.631 "thin_provision": false, 00:13:00.631 "num_allocated_clusters": 38, 00:13:00.631 "snapshot": false, 00:13:00.631 "clone": false, 00:13:00.631 "esnap_clone": false 00:13:00.631 } 00:13:00.631 } 00:13:00.631 } 00:13:00.631 ] 00:13:00.631 12:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:13:00.631 12:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:00.631 12:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c0a24d1-768b-407b-85be-47474029ac93 00:13:00.888 12:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:00.888 12:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c0a24d1-768b-407b-85be-47474029ac93 00:13:00.888 12:24:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:01.175 12:24:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:01.175 12:24:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b039c47d-d5d7-45be-aec7-837e7865f524 00:13:01.432 12:24:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9c0a24d1-768b-407b-85be-47474029ac93 00:13:01.690 12:24:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:01.946 12:24:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:02.203 00:13:02.203 real 0m18.438s 00:13:02.203 user 0m17.466s 00:13:02.203 sys 0m2.524s 00:13:02.203 12:24:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:02.203 12:24:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:02.203 ************************************ 00:13:02.203 END TEST lvs_grow_clean 00:13:02.203 ************************************ 00:13:02.460 12:24:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:02.460 12:24:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:02.460 12:24:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:02.460 12:24:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:02.460 12:24:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:02.460 ************************************ 00:13:02.460 START TEST lvs_grow_dirty 00:13:02.460 ************************************ 00:13:02.460 12:24:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:13:02.460 12:24:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:02.460 12:24:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:02.460 12:24:31 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:02.460 12:24:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:02.460 12:24:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:02.460 12:24:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:02.460 12:24:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:02.460 12:24:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:02.460 12:24:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:02.716 12:24:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:02.716 12:24:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:02.972 12:24:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=632a2d09-8641-47e3-8876-f1d2fa79bc8b 00:13:02.972 12:24:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 632a2d09-8641-47e3-8876-f1d2fa79bc8b 00:13:02.972 12:24:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:03.231 12:24:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:03.231 12:24:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:03.231 12:24:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 632a2d09-8641-47e3-8876-f1d2fa79bc8b lvol 150 00:13:03.489 12:24:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0567e87c-c3eb-4be0-ac44-af14c147bdcc 00:13:03.489 12:24:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:03.489 12:24:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:03.746 [2024-07-12 12:24:32.635674] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:03.746 [2024-07-12 12:24:32.635767] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:03.746 true 00:13:03.746 12:24:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 632a2d09-8641-47e3-8876-f1d2fa79bc8b 00:13:03.746 12:24:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:04.003 12:24:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 
)) 00:13:04.003 12:24:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:04.259 12:24:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0567e87c-c3eb-4be0-ac44-af14c147bdcc 00:13:04.515 12:24:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:04.773 [2024-07-12 12:24:33.696278] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.773 12:24:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:05.032 12:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=78175 00:13:05.032 12:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:05.032 12:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:05.032 12:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 78175 /var/tmp/bdevperf.sock 00:13:05.032 12:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 78175 ']' 00:13:05.032 12:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:05.032 12:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:05.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:05.032 12:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:05.032 12:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:05.032 12:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:05.032 [2024-07-12 12:24:34.053493] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:13:05.032 [2024-07-12 12:24:34.053579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78175 ] 00:13:05.290 [2024-07-12 12:24:34.186053] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.290 [2024-07-12 12:24:34.272077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.290 [2024-07-12 12:24:34.326716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:05.548 12:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:05.548 12:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:05.548 12:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:05.806 Nvme0n1 00:13:05.806 12:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:06.065 [ 00:13:06.065 { 00:13:06.065 "name": "Nvme0n1", 00:13:06.065 "aliases": [ 00:13:06.065 "0567e87c-c3eb-4be0-ac44-af14c147bdcc" 00:13:06.065 ], 00:13:06.065 "product_name": "NVMe disk", 00:13:06.065 "block_size": 4096, 00:13:06.065 "num_blocks": 38912, 00:13:06.065 "uuid": "0567e87c-c3eb-4be0-ac44-af14c147bdcc", 00:13:06.065 "assigned_rate_limits": { 00:13:06.065 "rw_ios_per_sec": 0, 00:13:06.065 "rw_mbytes_per_sec": 0, 00:13:06.065 "r_mbytes_per_sec": 0, 00:13:06.065 "w_mbytes_per_sec": 0 00:13:06.065 }, 00:13:06.065 "claimed": false, 00:13:06.065 "zoned": false, 00:13:06.065 "supported_io_types": { 00:13:06.065 "read": true, 00:13:06.065 "write": true, 00:13:06.065 "unmap": true, 00:13:06.065 "flush": true, 00:13:06.065 "reset": true, 00:13:06.065 "nvme_admin": true, 00:13:06.065 "nvme_io": true, 00:13:06.065 "nvme_io_md": false, 00:13:06.065 "write_zeroes": true, 00:13:06.065 "zcopy": false, 00:13:06.065 "get_zone_info": false, 00:13:06.065 "zone_management": false, 00:13:06.065 "zone_append": false, 00:13:06.065 "compare": true, 00:13:06.065 "compare_and_write": true, 00:13:06.065 "abort": true, 00:13:06.065 "seek_hole": false, 00:13:06.065 "seek_data": false, 00:13:06.065 "copy": true, 00:13:06.065 "nvme_iov_md": false 00:13:06.065 }, 00:13:06.065 "memory_domains": [ 00:13:06.065 { 00:13:06.065 "dma_device_id": "system", 00:13:06.065 "dma_device_type": 1 00:13:06.065 } 00:13:06.065 ], 00:13:06.065 "driver_specific": { 00:13:06.065 "nvme": [ 00:13:06.065 { 00:13:06.065 "trid": { 00:13:06.065 "trtype": "TCP", 00:13:06.065 "adrfam": "IPv4", 00:13:06.065 "traddr": "10.0.0.2", 00:13:06.065 "trsvcid": "4420", 00:13:06.065 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:06.065 }, 00:13:06.065 "ctrlr_data": { 00:13:06.065 "cntlid": 1, 00:13:06.065 "vendor_id": "0x8086", 00:13:06.065 "model_number": "SPDK bdev Controller", 00:13:06.065 "serial_number": "SPDK0", 00:13:06.065 "firmware_revision": "24.09", 00:13:06.065 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:06.065 "oacs": { 00:13:06.065 "security": 0, 00:13:06.065 "format": 0, 00:13:06.065 "firmware": 0, 00:13:06.065 "ns_manage": 0 00:13:06.065 }, 00:13:06.065 "multi_ctrlr": true, 00:13:06.065 
"ana_reporting": false 00:13:06.065 }, 00:13:06.065 "vs": { 00:13:06.065 "nvme_version": "1.3" 00:13:06.065 }, 00:13:06.065 "ns_data": { 00:13:06.065 "id": 1, 00:13:06.065 "can_share": true 00:13:06.065 } 00:13:06.065 } 00:13:06.065 ], 00:13:06.065 "mp_policy": "active_passive" 00:13:06.065 } 00:13:06.065 } 00:13:06.065 ] 00:13:06.065 12:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=78191 00:13:06.065 12:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:06.065 12:24:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:06.065 Running I/O for 10 seconds... 00:13:07.439 Latency(us) 00:13:07.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:07.439 Nvme0n1 : 1.00 7874.00 30.76 0.00 0.00 0.00 0.00 0.00 00:13:07.439 =================================================================================================================== 00:13:07.439 Total : 7874.00 30.76 0.00 0.00 0.00 0.00 0.00 00:13:07.439 00:13:08.004 12:24:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 632a2d09-8641-47e3-8876-f1d2fa79bc8b 00:13:08.262 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:08.262 Nvme0n1 : 2.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:13:08.262 =================================================================================================================== 00:13:08.262 Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:13:08.262 00:13:08.262 true 00:13:08.262 12:24:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:08.262 12:24:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 632a2d09-8641-47e3-8876-f1d2fa79bc8b 00:13:08.560 12:24:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:08.560 12:24:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:08.560 12:24:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 78191 00:13:09.126 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:09.126 Nvme0n1 : 3.00 7704.67 30.10 0.00 0.00 0.00 0.00 0.00 00:13:09.126 =================================================================================================================== 00:13:09.126 Total : 7704.67 30.10 0.00 0.00 0.00 0.00 0.00 00:13:09.126 00:13:10.060 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:10.060 Nvme0n1 : 4.00 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:13:10.060 =================================================================================================================== 00:13:10.060 Total : 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:13:10.060 00:13:11.432 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:11.432 Nvme0n1 : 5.00 7670.80 29.96 0.00 0.00 0.00 0.00 0.00 00:13:11.432 =================================================================================================================== 00:13:11.433 Total : 7670.80 29.96 0.00 0.00 0.00 
0.00 0.00 00:13:11.433 00:13:12.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:12.366 Nvme0n1 : 6.00 7641.17 29.85 0.00 0.00 0.00 0.00 0.00 00:13:12.366 =================================================================================================================== 00:13:12.366 Total : 7641.17 29.85 0.00 0.00 0.00 0.00 0.00 00:13:12.366 00:13:13.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:13.370 Nvme0n1 : 7.00 7231.14 28.25 0.00 0.00 0.00 0.00 0.00 00:13:13.370 =================================================================================================================== 00:13:13.370 Total : 7231.14 28.25 0.00 0.00 0.00 0.00 0.00 00:13:13.370 00:13:14.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:14.302 Nvme0n1 : 8.00 7232.12 28.25 0.00 0.00 0.00 0.00 0.00 00:13:14.302 =================================================================================================================== 00:13:14.302 Total : 7232.12 28.25 0.00 0.00 0.00 0.00 0.00 00:13:14.302 00:13:15.234 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:15.234 Nvme0n1 : 9.00 7232.89 28.25 0.00 0.00 0.00 0.00 0.00 00:13:15.234 =================================================================================================================== 00:13:15.234 Total : 7232.89 28.25 0.00 0.00 0.00 0.00 0.00 00:13:15.234 00:13:16.168 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:16.169 Nvme0n1 : 10.00 7170.00 28.01 0.00 0.00 0.00 0.00 0.00 00:13:16.169 =================================================================================================================== 00:13:16.169 Total : 7170.00 28.01 0.00 0.00 0.00 0.00 0.00 00:13:16.169 00:13:16.169 00:13:16.169 Latency(us) 00:13:16.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:16.169 Nvme0n1 : 10.00 7180.19 28.05 0.00 0.00 17820.24 6940.86 379393.86 00:13:16.169 =================================================================================================================== 00:13:16.169 Total : 7180.19 28.05 0.00 0.00 17820.24 6940.86 379393.86 00:13:16.169 0 00:13:16.169 12:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 78175 00:13:16.169 12:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 78175 ']' 00:13:16.169 12:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 78175 00:13:16.169 12:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:13:16.169 12:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:16.169 12:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78175 00:13:16.169 12:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:16.169 12:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:16.169 12:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78175' 00:13:16.169 killing process with pid 78175 00:13:16.169 Received shutdown signal, test time was about 10.000000 seconds 00:13:16.169 00:13:16.169 
Latency(us) 00:13:16.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.169 =================================================================================================================== 00:13:16.169 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:16.169 12:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 78175 00:13:16.169 12:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 78175 00:13:16.487 12:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:16.744 12:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:17.002 12:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 632a2d09-8641-47e3-8876-f1d2fa79bc8b 00:13:17.002 12:24:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:17.260 12:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:17.260 12:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:17.260 12:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 77808 00:13:17.260 12:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 77808 00:13:17.260 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 77808 Killed "${NVMF_APP[@]}" "$@" 00:13:17.260 12:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:17.260 12:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:17.260 12:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:17.260 12:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:17.260 12:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:17.260 12:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=78324 00:13:17.260 12:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 78324 00:13:17.260 12:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:17.260 12:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 78324 ']' 00:13:17.260 12:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.260 12:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:17.260 12:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
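The step logged here is what makes the dirty variant dirty: nvmf_tgt is killed with SIGKILL while the lvstore is still open, so nothing is flushed, and the freshly started target has to recover the blobstore when the same backing file is re-registered (the bs_recover notices appear just below). A hand-written sketch of that kill/reload sequence, with rpc.py standing for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and the UUID left as a placeholder, looks roughly like:
  kill -9 "$nvmfpid"                                                                            # target dies without closing the lvstore
  # ...start a new nvmf_tgt and wait for /var/tmp/spdk.sock...
  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096   # examine kicks off blobstore recovery
  rpc.py bdev_lvol_get_lvstores -u <lvstore-uuid> | jq -r '.[0].free_clusters'                  # expected to report 61 again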
00:13:17.260 12:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:17.260 12:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:17.260 [2024-07-12 12:24:46.229719] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:13:17.260 [2024-07-12 12:24:46.229854] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.518 [2024-07-12 12:24:46.365384] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.518 [2024-07-12 12:24:46.452955] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.518 [2024-07-12 12:24:46.453006] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.518 [2024-07-12 12:24:46.453018] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.518 [2024-07-12 12:24:46.453027] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.518 [2024-07-12 12:24:46.453034] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.518 [2024-07-12 12:24:46.453064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.518 [2024-07-12 12:24:46.509242] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:18.452 12:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:18.452 12:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:18.452 12:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:18.452 12:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:18.452 12:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:18.452 12:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.452 12:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:18.452 [2024-07-12 12:24:47.498854] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:18.452 [2024-07-12 12:24:47.499112] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:18.452 [2024-07-12 12:24:47.499357] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:18.709 12:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:18.709 12:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0567e87c-c3eb-4be0-ac44-af14c147bdcc 00:13:18.709 12:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=0567e87c-c3eb-4be0-ac44-af14c147bdcc 00:13:18.709 12:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:18.709 12:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
00:13:18.709 12:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:18.709 12:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:18.709 12:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:18.710 12:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0567e87c-c3eb-4be0-ac44-af14c147bdcc -t 2000 00:13:19.276 [ 00:13:19.276 { 00:13:19.276 "name": "0567e87c-c3eb-4be0-ac44-af14c147bdcc", 00:13:19.276 "aliases": [ 00:13:19.276 "lvs/lvol" 00:13:19.276 ], 00:13:19.276 "product_name": "Logical Volume", 00:13:19.276 "block_size": 4096, 00:13:19.276 "num_blocks": 38912, 00:13:19.276 "uuid": "0567e87c-c3eb-4be0-ac44-af14c147bdcc", 00:13:19.276 "assigned_rate_limits": { 00:13:19.276 "rw_ios_per_sec": 0, 00:13:19.276 "rw_mbytes_per_sec": 0, 00:13:19.276 "r_mbytes_per_sec": 0, 00:13:19.276 "w_mbytes_per_sec": 0 00:13:19.276 }, 00:13:19.276 "claimed": false, 00:13:19.276 "zoned": false, 00:13:19.276 "supported_io_types": { 00:13:19.276 "read": true, 00:13:19.276 "write": true, 00:13:19.276 "unmap": true, 00:13:19.276 "flush": false, 00:13:19.276 "reset": true, 00:13:19.276 "nvme_admin": false, 00:13:19.276 "nvme_io": false, 00:13:19.276 "nvme_io_md": false, 00:13:19.276 "write_zeroes": true, 00:13:19.276 "zcopy": false, 00:13:19.276 "get_zone_info": false, 00:13:19.276 "zone_management": false, 00:13:19.276 "zone_append": false, 00:13:19.276 "compare": false, 00:13:19.276 "compare_and_write": false, 00:13:19.276 "abort": false, 00:13:19.276 "seek_hole": true, 00:13:19.276 "seek_data": true, 00:13:19.276 "copy": false, 00:13:19.276 "nvme_iov_md": false 00:13:19.276 }, 00:13:19.276 "driver_specific": { 00:13:19.276 "lvol": { 00:13:19.276 "lvol_store_uuid": "632a2d09-8641-47e3-8876-f1d2fa79bc8b", 00:13:19.276 "base_bdev": "aio_bdev", 00:13:19.276 "thin_provision": false, 00:13:19.276 "num_allocated_clusters": 38, 00:13:19.276 "snapshot": false, 00:13:19.276 "clone": false, 00:13:19.276 "esnap_clone": false 00:13:19.276 } 00:13:19.276 } 00:13:19.276 } 00:13:19.276 ] 00:13:19.276 12:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:19.276 12:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:19.276 12:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 632a2d09-8641-47e3-8876-f1d2fa79bc8b 00:13:19.276 12:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:19.276 12:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 632a2d09-8641-47e3-8876-f1d2fa79bc8b 00:13:19.276 12:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:19.842 12:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:19.842 12:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:19.842 [2024-07-12 12:24:48.896128] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:13:20.101 12:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 632a2d09-8641-47e3-8876-f1d2fa79bc8b 00:13:20.101 12:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:13:20.101 12:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 632a2d09-8641-47e3-8876-f1d2fa79bc8b 00:13:20.101 12:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:20.101 12:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:20.101 12:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:20.101 12:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:20.101 12:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:20.101 12:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:20.101 12:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:20.101 12:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:20.101 12:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 632a2d09-8641-47e3-8876-f1d2fa79bc8b 00:13:20.101 request: 00:13:20.101 { 00:13:20.101 "uuid": "632a2d09-8641-47e3-8876-f1d2fa79bc8b", 00:13:20.101 "method": "bdev_lvol_get_lvstores", 00:13:20.101 "req_id": 1 00:13:20.101 } 00:13:20.101 Got JSON-RPC error response 00:13:20.101 response: 00:13:20.101 { 00:13:20.101 "code": -19, 00:13:20.101 "message": "No such device" 00:13:20.101 } 00:13:20.101 12:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:13:20.101 12:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:20.101 12:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:20.101 12:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:20.101 12:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:20.359 aio_bdev 00:13:20.617 12:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0567e87c-c3eb-4be0-ac44-af14c147bdcc 00:13:20.617 12:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=0567e87c-c3eb-4be0-ac44-af14c147bdcc 00:13:20.617 12:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:20.617 12:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:20.617 12:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:20.617 12:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:20.617 12:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:20.617 12:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0567e87c-c3eb-4be0-ac44-af14c147bdcc -t 2000 00:13:20.875 [ 00:13:20.875 { 00:13:20.875 "name": "0567e87c-c3eb-4be0-ac44-af14c147bdcc", 00:13:20.875 "aliases": [ 00:13:20.875 "lvs/lvol" 00:13:20.875 ], 00:13:20.875 "product_name": "Logical Volume", 00:13:20.875 "block_size": 4096, 00:13:20.875 "num_blocks": 38912, 00:13:20.875 "uuid": "0567e87c-c3eb-4be0-ac44-af14c147bdcc", 00:13:20.875 "assigned_rate_limits": { 00:13:20.875 "rw_ios_per_sec": 0, 00:13:20.875 "rw_mbytes_per_sec": 0, 00:13:20.875 "r_mbytes_per_sec": 0, 00:13:20.875 "w_mbytes_per_sec": 0 00:13:20.875 }, 00:13:20.875 "claimed": false, 00:13:20.875 "zoned": false, 00:13:20.875 "supported_io_types": { 00:13:20.875 "read": true, 00:13:20.875 "write": true, 00:13:20.875 "unmap": true, 00:13:20.875 "flush": false, 00:13:20.875 "reset": true, 00:13:20.875 "nvme_admin": false, 00:13:20.875 "nvme_io": false, 00:13:20.875 "nvme_io_md": false, 00:13:20.875 "write_zeroes": true, 00:13:20.875 "zcopy": false, 00:13:20.875 "get_zone_info": false, 00:13:20.875 "zone_management": false, 00:13:20.875 "zone_append": false, 00:13:20.875 "compare": false, 00:13:20.875 "compare_and_write": false, 00:13:20.875 "abort": false, 00:13:20.875 "seek_hole": true, 00:13:20.875 "seek_data": true, 00:13:20.876 "copy": false, 00:13:20.876 "nvme_iov_md": false 00:13:20.876 }, 00:13:20.876 "driver_specific": { 00:13:20.876 "lvol": { 00:13:20.876 "lvol_store_uuid": "632a2d09-8641-47e3-8876-f1d2fa79bc8b", 00:13:20.876 "base_bdev": "aio_bdev", 00:13:20.876 "thin_provision": false, 00:13:20.876 "num_allocated_clusters": 38, 00:13:20.876 "snapshot": false, 00:13:20.876 "clone": false, 00:13:20.876 "esnap_clone": false 00:13:20.876 } 00:13:20.876 } 00:13:20.876 } 00:13:20.876 ] 00:13:20.876 12:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:20.876 12:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 632a2d09-8641-47e3-8876-f1d2fa79bc8b 00:13:20.876 12:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:21.134 12:24:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:21.134 12:24:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 632a2d09-8641-47e3-8876-f1d2fa79bc8b 00:13:21.134 12:24:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:21.391 12:24:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:21.391 12:24:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0567e87c-c3eb-4be0-ac44-af14c147bdcc 00:13:21.647 12:24:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 632a2d09-8641-47e3-8876-f1d2fa79bc8b 00:13:21.903 12:24:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:22.159 12:24:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:22.723 00:13:22.723 real 0m20.230s 00:13:22.723 user 0m42.933s 00:13:22.723 sys 0m7.676s 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:22.723 ************************************ 00:13:22.723 END TEST lvs_grow_dirty 00:13:22.723 ************************************ 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:22.723 nvmf_trace.0 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:22.723 rmmod nvme_tcp 00:13:22.723 rmmod nvme_fabrics 00:13:22.723 rmmod nvme_keyring 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:22.723 12:24:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:13:22.724 12:24:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:22.724 12:24:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 78324 ']' 00:13:22.724 12:24:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 78324 00:13:22.724 12:24:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 78324 ']' 00:13:22.724 12:24:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 78324 00:13:22.724 12:24:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:13:22.981 12:24:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:13:22.981 12:24:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78324 00:13:22.981 12:24:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:22.981 12:24:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:22.981 killing process with pid 78324 00:13:22.981 12:24:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78324' 00:13:22.981 12:24:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 78324 00:13:22.981 12:24:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 78324 00:13:22.981 12:24:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:22.981 12:24:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:22.981 12:24:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:22.981 12:24:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:22.981 12:24:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:22.981 12:24:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.981 12:24:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:22.981 12:24:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.981 12:24:52 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:23.239 00:13:23.239 real 0m41.168s 00:13:23.239 user 1m6.951s 00:13:23.239 sys 0m10.920s 00:13:23.239 12:24:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:23.239 12:24:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:23.239 ************************************ 00:13:23.239 END TEST nvmf_lvs_grow 00:13:23.239 ************************************ 00:13:23.239 12:24:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:23.239 12:24:52 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:23.239 12:24:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:23.239 12:24:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:23.239 12:24:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:23.239 ************************************ 00:13:23.239 START TEST nvmf_bdev_io_wait 00:13:23.239 ************************************ 00:13:23.239 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:23.239 * Looking for test storage... 
00:13:23.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:23.239 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:23.240 Cannot find device "nvmf_tgt_br" 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:23.240 Cannot find device "nvmf_tgt_br2" 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:23.240 Cannot find device "nvmf_tgt_br" 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:23.240 Cannot find device "nvmf_tgt_br2" 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:23.240 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
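The "Cannot find device" messages above are expected on a fresh host: nvmf_veth_init tears down whatever interfaces a previous run may have left behind before rebuilding the test network from scratch. Condensed from the commands traced below, the topology it builds is roughly the following (a sketch reconstructed from the trace, not the verbatim common.sh code; the real helper also brings every link up and ping-checks each address):

  # target-side veth ends live inside the nvmf_tgt_ns_spdk namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator on 10.0.0.1, target interfaces on 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # the *_br peers are enslaved to one bridge so host and namespace can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # let NVMe/TCP traffic in on the default port
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
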
00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:23.498 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:23.498 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:23.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:13:23.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:13:23.498 00:13:23.498 --- 10.0.0.2 ping statistics --- 00:13:23.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.498 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:23.498 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:23.498 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:13:23.498 00:13:23.498 --- 10.0.0.3 ping statistics --- 00:13:23.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.498 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:23.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:23.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:13:23.498 00:13:23.498 --- 10.0.0.1 ping statistics --- 00:13:23.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.498 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:23.498 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:23.499 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:23.499 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:23.499 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:23.499 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:23.499 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:23.499 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:23.499 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=78640 00:13:23.499 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 78640 00:13:23.499 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 78640 ']' 00:13:23.499 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.499 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:23.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.499 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
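With connectivity confirmed by the three pings above, nvmfappstart launches the target inside the namespace with --wait-for-rpc, so the reactors start but the bdev layer stays un-initialized until the test drives it over /var/tmp/spdk.sock. That is why bdev_set_options and framework_start_init appear as explicit RPCs a little further down: the small bdev_io pool (-p 5 -c 1) has to be set before init, presumably so that pool exhaustion, the condition bdev_io_wait exists to handle, can actually be provoked during the run. A minimal sketch of the same start-up sequence (assuming SPDK's stock scripts/rpc.py in place of the script's rpc_cmd wrapper):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!

  # poll until the RPC socket answers before configuring anything
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
      sleep 0.1
  done

  scripts/rpc.py bdev_set_options -p 5 -c 1   # tiny pool/cache to force bdev_io exhaustion
  scripts/rpc.py framework_start_init
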
00:13:23.499 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:23.499 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:23.499 12:24:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:23.756 [2024-07-12 12:24:52.615461] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:13:23.756 [2024-07-12 12:24:52.615543] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.756 [2024-07-12 12:24:52.752728] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.014 [2024-07-12 12:24:52.851520] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.014 [2024-07-12 12:24:52.851583] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.014 [2024-07-12 12:24:52.851595] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.014 [2024-07-12 12:24:52.851604] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.014 [2024-07-12 12:24:52.851611] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:24.014 [2024-07-12 12:24:52.851726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.014 [2024-07-12 12:24:52.851916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.014 [2024-07-12 12:24:52.852684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.014 [2024-07-12 12:24:52.852728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.578 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:24.578 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:13:24.578 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:24.578 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:24.578 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:24.835 [2024-07-12 12:24:53.739646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:24.835 [2024-07-12 12:24:53.755938] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:24.835 Malloc0 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:24.835 [2024-07-12 12:24:53.817908] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=78679 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=78681 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:24.835 { 00:13:24.835 "params": { 00:13:24.835 "name": "Nvme$subsystem", 00:13:24.835 "trtype": "$TEST_TRANSPORT", 
00:13:24.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:24.835 "adrfam": "ipv4", 00:13:24.835 "trsvcid": "$NVMF_PORT", 00:13:24.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:24.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:24.835 "hdgst": ${hdgst:-false}, 00:13:24.835 "ddgst": ${ddgst:-false} 00:13:24.835 }, 00:13:24.835 "method": "bdev_nvme_attach_controller" 00:13:24.835 } 00:13:24.835 EOF 00:13:24.835 )") 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=78683 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=78686 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:24.835 { 00:13:24.835 "params": { 00:13:24.835 "name": "Nvme$subsystem", 00:13:24.835 "trtype": "$TEST_TRANSPORT", 00:13:24.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:24.835 "adrfam": "ipv4", 00:13:24.835 "trsvcid": "$NVMF_PORT", 00:13:24.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:24.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:24.835 "hdgst": ${hdgst:-false}, 00:13:24.835 "ddgst": ${ddgst:-false} 00:13:24.835 }, 00:13:24.835 "method": "bdev_nvme_attach_controller" 00:13:24.835 } 00:13:24.835 EOF 00:13:24.835 )") 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
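Each bdevperf instance is handed its target description through --json /dev/fd/63, presumably a bash process substitution over gen_nvmf_target_json; only the core mask (-m), instance id (-i) and workload (-w write/read/flush/unmap) differ between the four launches. The fragment printed here (and again, identically, for the two instances traced below) is the attach-controller entry each process receives; tidied up it reads:

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

The here-doc and jq . calls visible around it appear to wrap this entry in a subsystems/bdev/config envelope before bdevperf ever sees it.
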
00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:24.835 { 00:13:24.835 "params": { 00:13:24.835 "name": "Nvme$subsystem", 00:13:24.835 "trtype": "$TEST_TRANSPORT", 00:13:24.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:24.835 "adrfam": "ipv4", 00:13:24.835 "trsvcid": "$NVMF_PORT", 00:13:24.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:24.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:24.835 "hdgst": ${hdgst:-false}, 00:13:24.835 "ddgst": ${ddgst:-false} 00:13:24.835 }, 00:13:24.835 "method": "bdev_nvme_attach_controller" 00:13:24.835 } 00:13:24.835 EOF 00:13:24.835 )") 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:24.835 "params": { 00:13:24.835 "name": "Nvme1", 00:13:24.835 "trtype": "tcp", 00:13:24.835 "traddr": "10.0.0.2", 00:13:24.835 "adrfam": "ipv4", 00:13:24.835 "trsvcid": "4420", 00:13:24.835 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.835 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:24.835 "hdgst": false, 00:13:24.835 "ddgst": false 00:13:24.835 }, 00:13:24.835 "method": "bdev_nvme_attach_controller" 00:13:24.835 }' 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:24.835 { 00:13:24.835 "params": { 00:13:24.835 "name": "Nvme$subsystem", 00:13:24.835 "trtype": "$TEST_TRANSPORT", 00:13:24.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:24.835 "adrfam": "ipv4", 00:13:24.835 "trsvcid": "$NVMF_PORT", 00:13:24.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:24.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:24.835 "hdgst": ${hdgst:-false}, 00:13:24.835 "ddgst": ${ddgst:-false} 00:13:24.835 }, 00:13:24.835 "method": "bdev_nvme_attach_controller" 00:13:24.835 } 00:13:24.835 EOF 00:13:24.835 )") 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:24.835 "params": { 00:13:24.835 "name": "Nvme1", 00:13:24.835 "trtype": "tcp", 00:13:24.835 "traddr": "10.0.0.2", 00:13:24.835 "adrfam": "ipv4", 00:13:24.835 "trsvcid": "4420", 00:13:24.835 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.835 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:24.835 "hdgst": false, 00:13:24.835 "ddgst": false 00:13:24.835 }, 00:13:24.835 "method": "bdev_nvme_attach_controller" 00:13:24.835 }' 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:24.835 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:24.836 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:24.836 "params": { 00:13:24.836 "name": "Nvme1", 00:13:24.836 "trtype": "tcp", 00:13:24.836 "traddr": "10.0.0.2", 00:13:24.836 "adrfam": "ipv4", 00:13:24.836 "trsvcid": "4420", 00:13:24.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:24.836 "hdgst": false, 00:13:24.836 "ddgst": false 00:13:24.836 }, 00:13:24.836 "method": "bdev_nvme_attach_controller" 00:13:24.836 }' 00:13:24.836 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:24.836 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:24.836 "params": { 00:13:24.836 "name": "Nvme1", 00:13:24.836 "trtype": "tcp", 00:13:24.836 "traddr": "10.0.0.2", 00:13:24.836 "adrfam": "ipv4", 00:13:24.836 "trsvcid": "4420", 00:13:24.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:24.836 "hdgst": false, 00:13:24.836 "ddgst": false 00:13:24.836 }, 00:13:24.836 "method": "bdev_nvme_attach_controller" 00:13:24.836 }' 00:13:24.836 [2024-07-12 12:24:53.886988] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:13:24.836 [2024-07-12 12:24:53.887337] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:24.836 [2024-07-12 12:24:53.894825] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:13:24.836 [2024-07-12 12:24:53.895053] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:24.836 [2024-07-12 12:24:53.912663] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:13:24.836 [2024-07-12 12:24:53.913012] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:25.092 12:24:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 78679 00:13:25.092 [2024-07-12 12:24:53.922741] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:13:25.092 [2024-07-12 12:24:53.923003] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:25.092 [2024-07-12 12:24:54.096020] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.092 [2024-07-12 12:24:54.172011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:25.092 [2024-07-12 12:24:54.172565] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.348 [2024-07-12 12:24:54.235460] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:25.348 [2024-07-12 12:24:54.243506] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.348 [2024-07-12 12:24:54.251228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:25.348 [2024-07-12 12:24:54.299226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:25.348 [2024-07-12 12:24:54.316966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:25.348 [2024-07-12 12:24:54.319782] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.348 Running I/O for 1 seconds... 00:13:25.348 [2024-07-12 12:24:54.362809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:25.348 Running I/O for 1 seconds... 00:13:25.348 [2024-07-12 12:24:54.398529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:13:25.605 [2024-07-12 12:24:54.447346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:25.605 Running I/O for 1 seconds... 00:13:25.605 Running I/O for 1 seconds... 
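The four jobs run concurrently for one second, write on core mask 0x10, read on 0x20, flush on 0x40 and unmap on 0x80, all against the same Nvme1n1 namespace, and the script then waits on each pid in turn (78679/78681/78683/78686 in this run) before the results below are printed. The launch-and-collect pattern reduces to roughly this sketch (the process substitution is an assumption implied by the /dev/fd/63 arguments in the trace):

  build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
  build/examples/bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
  build/examples/bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
  build/examples/bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
  wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID

In the results that follow, the flush job's ~174k IOPS (versus 6-7k for the data workloads) suggests flush does little real work against a malloc-backed namespace, so that number is best read as submission overhead rather than transport performance; the 18 to 22 ms averages for write, read and unmap are what a 128-deep queue at 6-7k IOPS works out to over the veth link.
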
00:13:26.537 00:13:26.537 Latency(us) 00:13:26.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.537 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:26.537 Nvme1n1 : 1.03 6133.28 23.96 0.00 0.00 20397.04 7208.96 53858.68 00:13:26.537 =================================================================================================================== 00:13:26.537 Total : 6133.28 23.96 0.00 0.00 20397.04 7208.96 53858.68 00:13:26.537 00:13:26.537 Latency(us) 00:13:26.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.537 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:26.537 Nvme1n1 : 1.00 174037.34 679.83 0.00 0.00 732.72 353.75 1079.85 00:13:26.537 =================================================================================================================== 00:13:26.537 Total : 174037.34 679.83 0.00 0.00 732.72 353.75 1079.85 00:13:26.537 00:13:26.537 Latency(us) 00:13:26.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.537 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:26.537 Nvme1n1 : 1.02 6917.07 27.02 0.00 0.00 18352.84 10485.76 31695.59 00:13:26.537 =================================================================================================================== 00:13:26.537 Total : 6917.07 27.02 0.00 0.00 18352.84 10485.76 31695.59 00:13:26.537 00:13:26.537 Latency(us) 00:13:26.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.537 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:26.537 Nvme1n1 : 1.01 5790.97 22.62 0.00 0.00 22032.82 5659.93 59816.49 00:13:26.537 =================================================================================================================== 00:13:26.537 Total : 5790.97 22.62 0.00 0.00 22032.82 5659.93 59816.49 00:13:26.537 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 78681 00:13:26.795 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 78683 00:13:26.795 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 78686 00:13:26.795 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.795 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.795 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:26.795 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.795 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:26.795 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:26.795 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:26.795 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:13:26.795 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:26.795 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:13:26.795 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:26.795 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:26.795 rmmod nvme_tcp 00:13:26.795 rmmod nvme_fabrics 00:13:26.795 rmmod nvme_keyring 00:13:27.053 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:27.053 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:13:27.053 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:13:27.053 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 78640 ']' 00:13:27.053 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 78640 00:13:27.053 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 78640 ']' 00:13:27.053 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 78640 00:13:27.053 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:13:27.053 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:27.053 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78640 00:13:27.053 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:27.053 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:27.053 killing process with pid 78640 00:13:27.053 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78640' 00:13:27.053 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 78640 00:13:27.053 12:24:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 78640 00:13:27.312 12:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:27.312 12:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:27.312 12:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:27.312 12:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:27.312 12:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:27.312 12:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.312 12:24:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.312 12:24:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.312 12:24:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:27.312 ************************************ 00:13:27.312 END TEST nvmf_bdev_io_wait 00:13:27.312 ************************************ 00:13:27.312 00:13:27.312 real 0m4.107s 00:13:27.312 user 0m17.937s 00:13:27.312 sys 0m2.206s 00:13:27.312 12:24:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:27.312 12:24:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:27.312 12:24:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:27.312 12:24:56 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:27.312 12:24:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:27.312 12:24:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.312 12:24:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:27.312 ************************************ 00:13:27.312 START TEST nvmf_queue_depth 00:13:27.312 ************************************ 00:13:27.312 12:24:56 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:27.312 * Looking for test storage... 00:13:27.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.312 12:24:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:27.313 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:27.571 Cannot find device "nvmf_tgt_br" 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:27.571 Cannot find device "nvmf_tgt_br2" 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:27.571 Cannot find device "nvmf_tgt_br" 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:27.571 Cannot find device "nvmf_tgt_br2" 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:27.571 12:24:56 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:27.571 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:27.571 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:27.571 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:13:27.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:13:27.830 00:13:27.830 --- 10.0.0.2 ping statistics --- 00:13:27.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.830 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:27.830 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:27.830 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:13:27.830 00:13:27.830 --- 10.0.0.3 ping statistics --- 00:13:27.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.830 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:27.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:27.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:13:27.830 00:13:27.830 --- 10.0.0.1 ping statistics --- 00:13:27.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.830 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=78913 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 78913 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 78913 ']' 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:27.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
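Once the queue_depth target is up on its single core (-m 0x2), the flow differs from bdev_io_wait in how bdevperf is driven: it is started idle (-z) with its own RPC socket, the NVMe/TCP controller is attached over that socket, and only then does bdevperf.py trigger a 10-second verify run at queue depth 1024, far deeper than the 128 used above, which is what gives the test its name. The sequence traced below reduces to roughly this sketch (rpc.py and bdevperf.py stand in for the script's rpc_cmd wrapper; paths abbreviated relative to the spdk repo seen in the trace):

  # target side: export one 64 MB malloc bdev as cnode1 on 10.0.0.2:4420
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: bdevperf waits (-z) until perform_tests arrives on its own socket
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
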
00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:27.830 12:24:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:27.830 [2024-07-12 12:24:56.769323] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:13:27.830 [2024-07-12 12:24:56.769423] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.088 [2024-07-12 12:24:56.913083] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.088 [2024-07-12 12:24:57.014616] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.088 [2024-07-12 12:24:57.014676] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.088 [2024-07-12 12:24:57.014692] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.088 [2024-07-12 12:24:57.014703] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.088 [2024-07-12 12:24:57.014713] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.088 [2024-07-12 12:24:57.014743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.088 [2024-07-12 12:24:57.072660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:28.654 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:28.654 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:13:28.654 12:24:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:28.654 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:28.654 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.912 12:24:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.913 [2024-07-12 12:24:57.776739] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.913 Malloc0 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@10 -- # set +x 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.913 [2024-07-12 12:24:57.838280] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=78945 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 78945 /var/tmp/bdevperf.sock 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 78945 ']' 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:28.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:28.913 12:24:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.913 [2024-07-12 12:24:57.905142] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
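For reference, the queue-depth target setup traced above corresponds to this RPC sequence, issued through the harness's rpc_cmd wrapper against /var/tmp/spdk.sock (a sketch; all names and values are the ones visible in the trace):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM-backed bdev, 512 B blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # bdevperf then runs as a secondary app with a 1024-deep 4 KiB verify workload for 10 s
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

As the trace shows next, the controller is attached over the bdevperf RPC socket (bdev_nvme_attach_controller ... -a 10.0.0.2 -s 4420) and the run is kicked off with bdevperf.py perform_tests.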
00:13:28.913 [2024-07-12 12:24:57.905254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78945 ] 00:13:29.171 [2024-07-12 12:24:58.043399] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.171 [2024-07-12 12:24:58.161753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.171 [2024-07-12 12:24:58.237993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:30.110 12:24:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:30.110 12:24:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:13:30.110 12:24:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:30.110 12:24:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.110 12:24:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:30.110 NVMe0n1 00:13:30.110 12:24:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.110 12:24:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:30.110 Running I/O for 10 seconds... 00:13:42.310 00:13:42.310 Latency(us) 00:13:42.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.310 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:42.310 Verification LBA range: start 0x0 length 0x4000 00:13:42.310 NVMe0n1 : 10.09 7981.13 31.18 0.00 0.00 127470.84 38368.35 92941.96 00:13:42.310 =================================================================================================================== 00:13:42.310 Total : 7981.13 31.18 0.00 0.00 127470.84 38368.35 92941.96 00:13:42.310 0 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 78945 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 78945 ']' 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 78945 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78945 00:13:42.310 killing process with pid 78945 00:13:42.310 Received shutdown signal, test time was about 10.000000 seconds 00:13:42.310 00:13:42.310 Latency(us) 00:13:42.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.310 =================================================================================================================== 00:13:42.310 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 78945' 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 78945 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 78945 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:42.310 rmmod nvme_tcp 00:13:42.310 rmmod nvme_fabrics 00:13:42.310 rmmod nvme_keyring 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 78913 ']' 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 78913 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 78913 ']' 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 78913 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78913 00:13:42.310 killing process with pid 78913 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78913' 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 78913 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 78913 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:42.310 00:13:42.310 real 
0m13.597s 00:13:42.310 user 0m23.697s 00:13:42.310 sys 0m2.185s 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:42.310 12:25:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.310 ************************************ 00:13:42.310 END TEST nvmf_queue_depth 00:13:42.310 ************************************ 00:13:42.310 12:25:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:42.310 12:25:09 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:42.310 12:25:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:42.310 12:25:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:42.310 12:25:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:42.310 ************************************ 00:13:42.310 START TEST nvmf_target_multipath 00:13:42.310 ************************************ 00:13:42.310 12:25:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:42.310 * Looking for test storage... 00:13:42.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:42.310 12:25:10 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:42.310 Cannot find device "nvmf_tgt_br" 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:42.310 Cannot find device "nvmf_tgt_br2" 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:42.310 Cannot find device "nvmf_tgt_br" 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:42.310 Cannot find device "nvmf_tgt_br2" 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:42.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:42.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
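The network fixture being rebuilt above can be summarized as follows (a condensed sketch of the nvmf_veth_init steps in the trace; interface and namespace names are the harness defaults shown there):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target ends move into the netns
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # all interfaces are then brought up on both sides of each veth pair

As the next trace lines show, the three host-side peers are enslaved to a bridge (nvmf_br) and an iptables ACCEPT rule for TCP port 4420 is inserted before the ping checks are repeated.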
00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:42.310 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:42.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:42.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:13:42.310 00:13:42.310 --- 10.0.0.2 ping statistics --- 00:13:42.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.311 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:42.311 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:42.311 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:13:42.311 00:13:42.311 --- 10.0.0.3 ping statistics --- 00:13:42.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.311 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:42.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:42.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:13:42.311 00:13:42.311 --- 10.0.0.1 ping statistics --- 00:13:42.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.311 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=79270 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 79270 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 79270 ']' 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:42.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:42.311 12:25:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:42.311 [2024-07-12 12:25:10.433149] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
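The multipath test starts the same target binary in the same namespace, but with a four-core mask so that four reactors come up (the trace below shows reactors starting on cores 0 through 3); a sketch of the launch as traced:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # waitforlisten again polls /var/tmp/spdk.sock until the app answers RPCs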
00:13:42.311 [2024-07-12 12:25:10.433244] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.311 [2024-07-12 12:25:10.583603] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:42.311 [2024-07-12 12:25:10.689122] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.311 [2024-07-12 12:25:10.689185] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.311 [2024-07-12 12:25:10.689200] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.311 [2024-07-12 12:25:10.689211] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.311 [2024-07-12 12:25:10.689221] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.311 [2024-07-12 12:25:10.689347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.311 [2024-07-12 12:25:10.689457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.311 [2024-07-12 12:25:10.689829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:42.311 [2024-07-12 12:25:10.689836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.311 [2024-07-12 12:25:10.746547] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:42.568 12:25:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:42.568 12:25:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:13:42.568 12:25:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:42.568 12:25:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:42.568 12:25:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:42.568 12:25:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.568 12:25:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:42.826 [2024-07-12 12:25:11.757370] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.826 12:25:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:13:43.084 Malloc0 00:13:43.084 12:25:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:13:43.370 12:25:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:43.628 12:25:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.885 [2024-07-12 12:25:12.915014] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.885 12:25:12 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:44.143 [2024-07-12 12:25:13.207321] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:44.403 12:25:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:13:44.403 12:25:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:13:44.403 12:25:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:13:44.403 12:25:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:13:44.403 12:25:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:44.403 12:25:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:44.403 12:25:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:13:46.945 12:25:15 
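After the two nvme connect calls (one per listener address), the harness resolves which kernel NVMe subsystem the sessions landed in and then watches the per-path ANA state; roughly, using the names visible in the trace (a sketch of the get_subsystem and check_ana_state helpers being executed):

  # locate the /sys/class/nvme-subsystem entry matching cnode1 + SPDKISFASTANDAWESOME
  subsystem=nvme-subsys0
  paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*)   # -> nvme0c0n1 nvme0c1n1, one per listener
  # check_ana_state waits (timeout ~20 s) until a path reports the expected state
  cat /sys/block/nvme0c0n1/ana_state    # expected here: optimized
  cat /sys/block/nvme0c1n1/ana_state    # expected here: optimized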
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=79364 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:13:46.945 12:25:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:13:46.945 [global] 00:13:46.945 thread=1 00:13:46.945 invalidate=1 00:13:46.945 rw=randrw 00:13:46.945 time_based=1 00:13:46.945 runtime=6 00:13:46.945 ioengine=libaio 00:13:46.945 direct=1 00:13:46.945 bs=4096 00:13:46.945 iodepth=128 00:13:46.945 norandommap=0 00:13:46.945 numjobs=1 00:13:46.945 00:13:46.945 verify_dump=1 00:13:46.945 verify_backlog=512 00:13:46.945 verify_state_save=0 00:13:46.945 do_verify=1 00:13:46.945 verify=crc32c-intel 00:13:46.945 [job0] 00:13:46.945 filename=/dev/nvme0n1 00:13:46.945 Could not set queue depth (nvme0n1) 00:13:46.945 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:46.945 fio-3.35 00:13:46.945 Starting 1 thread 00:13:47.511 12:25:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:13:47.770 12:25:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:13:48.029 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:13:48.029 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:13:48.029 
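The fio job streamed above by the fio-wrapper invocation (-p nvmf -i 4096 -d 128 -t randrw -r 6 -v) is equivalent to this job file, reassembled here for readability from the parameters printed in the trace:

  [global]
  thread=1
  invalidate=1
  rw=randrw
  time_based=1
  runtime=6
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=128
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel

  [job0]
  filename=/dev/nvme0n1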
12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:48.029 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:48.029 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:48.029 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:48.029 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:13:48.029 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:13:48.029 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:48.029 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:48.029 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:48.029 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:48.029 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:13:48.287 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:13:48.853 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:13:48.854 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:13:48.854 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:48.854 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:48.854 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:48.854 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:48.854 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:13:48.854 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:13:48.854 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:48.854 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:48.854 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:48.854 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:48.854 12:25:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 79364 00:13:53.034 00:13:53.034 job0: (groupid=0, jobs=1): err= 0: pid=79386: Fri Jul 12 12:25:21 2024 00:13:53.034 read: IOPS=10.4k, BW=40.6MiB/s (42.5MB/s)(243MiB/6002msec) 00:13:53.034 slat (usec): min=3, max=7133, avg=56.72, stdev=224.91 00:13:53.034 clat (usec): min=1456, max=16554, avg=8398.89, stdev=1548.94 00:13:53.034 lat (usec): min=1466, max=16588, avg=8455.61, stdev=1553.76 00:13:53.034 clat percentiles (usec): 00:13:53.034 | 1.00th=[ 4293], 5.00th=[ 6325], 10.00th=[ 7111], 20.00th=[ 7504], 00:13:53.034 | 30.00th=[ 7767], 40.00th=[ 7963], 50.00th=[ 8160], 60.00th=[ 8356], 00:13:53.034 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[10290], 95.00th=[11863], 00:13:53.034 | 99.00th=[12911], 99.50th=[13173], 99.90th=[14091], 99.95th=[14222], 00:13:53.034 | 99.99th=[14353] 00:13:53.034 bw ( KiB/s): min=11352, max=26008, per=52.03%, avg=21610.36, stdev=4861.58, samples=11 00:13:53.034 iops : min= 2838, max= 6502, avg=5402.55, stdev=1215.38, samples=11 00:13:53.034 write: IOPS=6149, BW=24.0MiB/s (25.2MB/s)(129MiB/5366msec); 0 zone resets 00:13:53.034 slat (usec): min=4, max=1629, avg=64.96, stdev=157.03 00:13:53.034 clat (usec): min=1594, max=14023, avg=7206.20, stdev=1293.08 00:13:53.034 lat (usec): min=1619, max=14046, avg=7271.17, stdev=1298.26 00:13:53.034 clat percentiles (usec): 00:13:53.034 | 1.00th=[ 3359], 5.00th=[ 4293], 10.00th=[ 5735], 20.00th=[ 6718], 00:13:53.034 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7373], 60.00th=[ 7504], 00:13:53.034 | 70.00th=[ 7701], 80.00th=[ 7898], 90.00th=[ 8225], 95.00th=[ 8717], 00:13:53.034 | 99.00th=[11076], 99.50th=[11731], 99.90th=[12780], 99.95th=[13042], 00:13:53.034 | 99.99th=[13960] 00:13:53.034 bw ( KiB/s): min=11808, max=25624, per=87.85%, avg=21611.00, stdev=4508.52, samples=11 00:13:53.034 iops : min= 2952, max= 6406, avg=5402.73, stdev=1127.12, samples=11 00:13:53.034 lat (msec) : 2=0.02%, 4=1.59%, 10=88.61%, 20=9.78% 00:13:53.034 cpu : usr=5.68%, sys=21.36%, ctx=5543, majf=0, minf=84 00:13:53.034 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:13:53.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:53.034 issued rwts: total=62319,32999,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.034 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:53.034 00:13:53.034 Run status group 0 (all jobs): 00:13:53.034 READ: bw=40.6MiB/s (42.5MB/s), 40.6MiB/s-40.6MiB/s (42.5MB/s-42.5MB/s), io=243MiB (255MB), run=6002-6002msec 00:13:53.034 WRITE: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=129MiB (135MB), run=5366-5366msec 00:13:53.034 00:13:53.034 Disk stats (read/write): 00:13:53.034 nvme0n1: ios=61322/32487, merge=0/0, ticks=495279/220341, in_queue=715620, util=98.66% 00:13:53.034 12:25:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:13:53.034 12:25:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4420 -n optimized 00:13:53.291 12:25:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:13:53.291 12:25:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:13:53.291 12:25:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:53.291 12:25:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:53.291 12:25:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:53.291 12:25:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:53.292 12:25:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:13:53.292 12:25:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:13:53.292 12:25:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:53.292 12:25:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:53.292 12:25:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:53.292 12:25:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:53.292 12:25:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:13:53.292 12:25:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:13:53.292 12:25:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=79466 00:13:53.292 12:25:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:13:53.550 [global] 00:13:53.550 thread=1 00:13:53.550 invalidate=1 00:13:53.550 rw=randrw 00:13:53.550 time_based=1 00:13:53.550 runtime=6 00:13:53.550 ioengine=libaio 00:13:53.550 direct=1 00:13:53.550 bs=4096 00:13:53.550 iodepth=128 00:13:53.550 norandommap=0 00:13:53.550 numjobs=1 00:13:53.550 00:13:53.550 verify_dump=1 00:13:53.550 verify_backlog=512 00:13:53.550 verify_state_save=0 00:13:53.550 do_verify=1 00:13:53.550 verify=crc32c-intel 00:13:53.550 [job0] 00:13:53.550 filename=/dev/nvme0n1 00:13:53.550 Could not set queue depth (nvme0n1) 00:13:53.550 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:53.550 fio-3.35 00:13:53.550 Starting 1 thread 00:13:54.503 12:25:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:13:54.760 12:25:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:13:55.018 12:25:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:13:55.018 12:25:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:13:55.018 12:25:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:55.018 12:25:23 nvmf_tcp.nvmf_target_multipath 
-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:55.018 12:25:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:55.018 12:25:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:55.018 12:25:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:13:55.018 12:25:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:13:55.018 12:25:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:55.018 12:25:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:55.018 12:25:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:55.018 12:25:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:55.018 12:25:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:13:55.276 12:25:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:13:55.534 12:25:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:13:55.534 12:25:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:13:55.534 12:25:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:55.534 12:25:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:55.534 12:25:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:55.534 12:25:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:55.534 12:25:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:13:55.534 12:25:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:13:55.534 12:25:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:55.534 12:25:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:55.534 12:25:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:55.534 12:25:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:55.534 12:25:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 79466 00:13:59.711 00:13:59.711 job0: (groupid=0, jobs=1): err= 0: pid=79487: Fri Jul 12 12:25:28 2024 00:13:59.711 read: IOPS=11.4k, BW=44.7MiB/s (46.9MB/s)(268MiB/6004msec) 00:13:59.711 slat (usec): min=7, max=7032, avg=43.97, stdev=190.38 00:13:59.711 clat (usec): min=313, max=17550, avg=7616.00, stdev=2115.41 00:13:59.711 lat (usec): min=328, max=17578, avg=7659.97, stdev=2130.09 00:13:59.711 clat percentiles (usec): 00:13:59.711 | 1.00th=[ 2573], 5.00th=[ 3752], 10.00th=[ 4621], 20.00th=[ 5997], 00:13:59.711 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 7898], 60.00th=[ 8094], 00:13:59.711 | 70.00th=[ 8356], 80.00th=[ 8848], 90.00th=[10028], 95.00th=[11338], 00:13:59.711 | 99.00th=[12911], 99.50th=[13698], 99.90th=[16188], 99.95th=[17171], 00:13:59.711 | 99.99th=[17433] 00:13:59.711 bw ( KiB/s): min=12456, max=41936, per=54.27%, avg=24851.64, stdev=8733.99, samples=11 00:13:59.711 iops : min= 3114, max=10484, avg=6212.91, stdev=2183.50, samples=11 00:13:59.711 write: IOPS=6784, BW=26.5MiB/s (27.8MB/s)(144MiB/5431msec); 0 zone resets 00:13:59.711 slat (usec): min=15, max=2063, avg=56.51, stdev=133.37 00:13:59.711 clat (usec): min=1253, max=15988, avg=6474.97, stdev=1814.82 00:13:59.711 lat (usec): min=1278, max=16016, avg=6531.47, stdev=1829.24 00:13:59.711 clat percentiles (usec): 00:13:59.711 | 1.00th=[ 2540], 5.00th=[ 3294], 10.00th=[ 3752], 20.00th=[ 4490], 00:13:59.711 | 30.00th=[ 5538], 40.00th=[ 6652], 50.00th=[ 7046], 60.00th=[ 7308], 00:13:59.711 | 70.00th=[ 7504], 80.00th=[ 7767], 90.00th=[ 8225], 95.00th=[ 8848], 00:13:59.711 | 99.00th=[10814], 99.50th=[11207], 99.90th=[12780], 99.95th=[13042], 00:13:59.711 | 99.99th=[14222] 00:13:59.711 bw ( KiB/s): min=12760, max=40960, per=91.53%, avg=24839.27, stdev=8529.09, samples=11 00:13:59.711 iops : min= 3190, max=10240, avg=6209.82, stdev=2132.27, samples=11 00:13:59.711 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:13:59.711 lat (msec) : 2=0.21%, 4=8.41%, 10=83.94%, 20=7.38% 00:13:59.711 cpu : usr=6.45%, sys=24.62%, ctx=6707, majf=0, minf=96 00:13:59.711 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:13:59.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:59.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:59.711 issued rwts: total=68728,36848,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:59.711 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:59.711 00:13:59.711 Run status group 0 (all jobs): 00:13:59.711 READ: bw=44.7MiB/s (46.9MB/s), 44.7MiB/s-44.7MiB/s (46.9MB/s-46.9MB/s), io=268MiB (282MB), run=6004-6004msec 00:13:59.711 WRITE: bw=26.5MiB/s (27.8MB/s), 26.5MiB/s-26.5MiB/s (27.8MB/s-27.8MB/s), io=144MiB (151MB), run=5431-5431msec 00:13:59.711 00:13:59.711 Disk stats (read/write): 00:13:59.711 nvme0n1: ios=67855/36296, merge=0/0, ticks=486340/213869, in_queue=700209, util=98.55% 00:13:59.711 12:25:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:59.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:59.711 12:25:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:59.711 12:25:28 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1219 -- # local i=0 00:13:59.711 12:25:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:59.711 12:25:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:59.711 12:25:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:59.711 12:25:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:59.711 12:25:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:13:59.712 12:25:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:00.002 12:25:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:14:00.002 12:25:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:14:00.002 12:25:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:14:00.002 12:25:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:14:00.002 12:25:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:00.002 12:25:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:00.002 12:25:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:00.002 12:25:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:00.002 12:25:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:00.002 12:25:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:00.002 rmmod nvme_tcp 00:14:00.002 rmmod nvme_fabrics 00:14:00.002 rmmod nvme_keyring 00:14:00.259 12:25:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:00.259 12:25:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:00.259 12:25:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:00.259 12:25:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 79270 ']' 00:14:00.259 12:25:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 79270 00:14:00.259 12:25:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 79270 ']' 00:14:00.259 12:25:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 79270 00:14:00.259 12:25:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:14:00.259 12:25:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:00.259 12:25:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79270 00:14:00.259 12:25:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:00.259 12:25:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:00.259 killing process with pid 79270 00:14:00.259 12:25:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79270' 00:14:00.259 12:25:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 79270 00:14:00.259 12:25:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 79270 00:14:00.517 
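The teardown traced above follows the usual pattern for these targets (a condensed sketch; every command comes from the surrounding trace, with 79270 being this run's target pid):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1               # drops both sessions (2 controllers)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp                                      # unloads nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 79270                                                   # killprocess: stop the nvmf_tgt app
  ip -4 addr flush nvmf_init_if                                # strip the test addresses from the host side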
12:25:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:00.517 12:25:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:00.517 12:25:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:00.517 12:25:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:00.517 12:25:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:00.517 12:25:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.517 12:25:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.517 12:25:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.517 12:25:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:00.517 00:14:00.517 real 0m19.482s 00:14:00.517 user 1m13.562s 00:14:00.517 sys 0m9.667s 00:14:00.517 12:25:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:00.517 12:25:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:00.517 ************************************ 00:14:00.517 END TEST nvmf_target_multipath 00:14:00.517 ************************************ 00:14:00.517 12:25:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:00.517 12:25:29 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:00.517 12:25:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:00.517 12:25:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.517 12:25:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:00.517 ************************************ 00:14:00.517 START TEST nvmf_zcopy 00:14:00.517 ************************************ 00:14:00.517 12:25:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:00.517 * Looking for test storage... 
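The zcopy stage that starts here can also be kicked off outside the harness; a minimal sketch, assuming the repo checkout at the path shown in the log and root privileges for the veth/netns setup the script performs:

# run the zcopy target test directly with the TCP transport, as run_test does above
cd /home/vagrant/spdk_repo/spdk
sudo bash ./test/nvmf/target/zcopy.sh --transport=tcp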
00:14:00.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:00.517 12:25:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:00.517 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:00.517 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:00.518 Cannot find device "nvmf_tgt_br" 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:14:00.518 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:00.776 Cannot find device "nvmf_tgt_br2" 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:00.776 Cannot find device "nvmf_tgt_br" 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:00.776 Cannot find device "nvmf_tgt_br2" 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:00.776 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:00.776 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:00.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:14:00.776 00:14:00.776 --- 10.0.0.2 ping statistics --- 00:14:00.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.776 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:00.776 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:00.776 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:14:00.776 00:14:00.776 --- 10.0.0.3 ping statistics --- 00:14:00.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.776 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:14:00.776 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:01.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:01.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:14:01.035 00:14:01.035 --- 10.0.0.1 ping statistics --- 00:14:01.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.035 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=79735 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 79735 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 79735 ']' 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.035 12:25:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:01.035 [2024-07-12 12:25:29.957822] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:01.035 [2024-07-12 12:25:29.957967] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.035 [2024-07-12 12:25:30.106940] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.293 [2024-07-12 12:25:30.275368] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.293 [2024-07-12 12:25:30.275498] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:01.293 [2024-07-12 12:25:30.275516] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.293 [2024-07-12 12:25:30.275529] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.293 [2024-07-12 12:25:30.275540] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.293 [2024-07-12 12:25:30.275621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.293 [2024-07-12 12:25:30.374774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:02.226 12:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.226 12:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:14:02.226 12:25:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.226 12:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:02.226 12:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:02.226 12:25:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.226 12:25:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:02.226 12:25:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:02.227 12:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.227 12:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:02.227 [2024-07-12 12:25:30.993902] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.227 12:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.227 12:25:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:02.227 12:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.227 12:25:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:02.227 [2024-07-12 12:25:31.009996] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:14:02.227 malloc0 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:02.227 { 00:14:02.227 "params": { 00:14:02.227 "name": "Nvme$subsystem", 00:14:02.227 "trtype": "$TEST_TRANSPORT", 00:14:02.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:02.227 "adrfam": "ipv4", 00:14:02.227 "trsvcid": "$NVMF_PORT", 00:14:02.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:02.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:02.227 "hdgst": ${hdgst:-false}, 00:14:02.227 "ddgst": ${ddgst:-false} 00:14:02.227 }, 00:14:02.227 "method": "bdev_nvme_attach_controller" 00:14:02.227 } 00:14:02.227 EOF 00:14:02.227 )") 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:02.227 12:25:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:02.227 "params": { 00:14:02.227 "name": "Nvme1", 00:14:02.227 "trtype": "tcp", 00:14:02.227 "traddr": "10.0.0.2", 00:14:02.227 "adrfam": "ipv4", 00:14:02.227 "trsvcid": "4420", 00:14:02.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:02.227 "hdgst": false, 00:14:02.227 "ddgst": false 00:14:02.227 }, 00:14:02.227 "method": "bdev_nvme_attach_controller" 00:14:02.227 }' 00:14:02.227 [2024-07-12 12:25:31.094048] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:02.227 [2024-07-12 12:25:31.094126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79768 ] 00:14:02.227 [2024-07-12 12:25:31.229264] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.486 [2024-07-12 12:25:31.314819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.486 [2024-07-12 12:25:31.377623] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:02.486 Running I/O for 10 seconds... 
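The config that gen_nvmf_target_json prints above is what bdevperf reads from /dev/fd/62. Written to a regular file, the same run looks roughly like the sketch below; the outer subsystems/bdev wrapper is an assumption based on SPDK's JSON-config layout (the trace only shows the resolved bdev_nvme_attach_controller object), and the file path is illustrative:

cat > /tmp/bdevperf_zcopy.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same bdevperf flags as the verify run above: 10 s runtime, queue depth 128, 8 KiB I/O
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf_zcopy.json -t 10 -q 128 -w verify -o 8192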
00:14:12.484 00:14:12.484 Latency(us) 00:14:12.484 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.484 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:12.484 Verification LBA range: start 0x0 length 0x1000 00:14:12.484 Nvme1n1 : 10.02 6078.70 47.49 0.00 0.00 20990.15 2457.60 31218.97 00:14:12.484 =================================================================================================================== 00:14:12.484 Total : 6078.70 47.49 0.00 0.00 20990.15 2457.60 31218.97 00:14:12.742 12:25:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=79889 00:14:12.742 12:25:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:12.742 12:25:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:12.742 12:25:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:12.742 12:25:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:12.742 12:25:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:12.742 12:25:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:12.742 12:25:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:12.742 12:25:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:12.742 { 00:14:12.742 "params": { 00:14:12.742 "name": "Nvme$subsystem", 00:14:12.742 "trtype": "$TEST_TRANSPORT", 00:14:12.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:12.742 "adrfam": "ipv4", 00:14:12.742 "trsvcid": "$NVMF_PORT", 00:14:12.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:12.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:12.742 "hdgst": ${hdgst:-false}, 00:14:12.742 "ddgst": ${ddgst:-false} 00:14:12.742 }, 00:14:12.742 "method": "bdev_nvme_attach_controller" 00:14:12.742 } 00:14:12.742 EOF 00:14:12.742 )") 00:14:12.742 12:25:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:12.742 [2024-07-12 12:25:41.723267] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.742 [2024-07-12 12:25:41.723315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.742 12:25:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:14:12.742 12:25:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:12.742 12:25:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:12.742 "params": { 00:14:12.742 "name": "Nvme1", 00:14:12.742 "trtype": "tcp", 00:14:12.742 "traddr": "10.0.0.2", 00:14:12.742 "adrfam": "ipv4", 00:14:12.742 "trsvcid": "4420", 00:14:12.742 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.742 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:12.742 "hdgst": false, 00:14:12.742 "ddgst": false 00:14:12.742 }, 00:14:12.742 "method": "bdev_nvme_attach_controller" 00:14:12.742 }' 00:14:12.742 [2024-07-12 12:25:41.731225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.742 [2024-07-12 12:25:41.731250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.742 [2024-07-12 12:25:41.743225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.742 [2024-07-12 12:25:41.743250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.742 [2024-07-12 12:25:41.751225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.742 [2024-07-12 12:25:41.751249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.742 [2024-07-12 12:25:41.763233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.742 [2024-07-12 12:25:41.763259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.742 [2024-07-12 12:25:41.771229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.742 [2024-07-12 12:25:41.771253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.742 [2024-07-12 12:25:41.772556] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:14:12.742 [2024-07-12 12:25:41.772655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79889 ] 00:14:12.742 [2024-07-12 12:25:41.779233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.742 [2024-07-12 12:25:41.779256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.742 [2024-07-12 12:25:41.787239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.742 [2024-07-12 12:25:41.787264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.742 [2024-07-12 12:25:41.795248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.742 [2024-07-12 12:25:41.795271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.742 [2024-07-12 12:25:41.807269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.742 [2024-07-12 12:25:41.807321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.743 [2024-07-12 12:25:41.819261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.743 [2024-07-12 12:25:41.819286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.001 [2024-07-12 12:25:41.831263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.001 [2024-07-12 12:25:41.831311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.001 [2024-07-12 12:25:41.843272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.001 [2024-07-12 12:25:41.843299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.001 [2024-07-12 12:25:41.855278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.001 [2024-07-12 12:25:41.855323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.001 [2024-07-12 12:25:41.867288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.001 [2024-07-12 12:25:41.867336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.001 [2024-07-12 12:25:41.879293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.001 [2024-07-12 12:25:41.879338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.001 [2024-07-12 12:25:41.891282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.001 [2024-07-12 12:25:41.891312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.001 [2024-07-12 12:25:41.903336] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.001 [2024-07-12 12:25:41.903360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.001 [2024-07-12 12:25:41.906522] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.001 [2024-07-12 12:25:41.915298] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.001 [2024-07-12 12:25:41.915347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:14:13.001 [2024-07-12 12:25:41.927308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.001 [2024-07-12 12:25:41.927335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.001 [2024-07-12 12:25:41.939294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.001 [2024-07-12 12:25:41.939324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.001 [2024-07-12 12:25:41.951333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.001 [2024-07-12 12:25:41.951357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.001 [2024-07-12 12:25:41.963350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.001 [2024-07-12 12:25:41.963391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.001 [2024-07-12 12:25:41.975341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.001 [2024-07-12 12:25:41.975366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.001 [2024-07-12 12:25:41.987309] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.001 [2024-07-12 12:25:41.987349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.001 [2024-07-12 12:25:41.992210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.001 [2024-07-12 12:25:41.999313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.001 [2024-07-12 12:25:41.999353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.001 [2024-07-12 12:25:42.011329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.001 [2024-07-12 12:25:42.011358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.001 [2024-07-12 12:25:42.023344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.001 [2024-07-12 12:25:42.023375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.001 [2024-07-12 12:25:42.035366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.001 [2024-07-12 12:25:42.035395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.001 [2024-07-12 12:25:42.047357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.001 [2024-07-12 12:25:42.047385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.001 [2024-07-12 12:25:42.057135] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:13.001 [2024-07-12 12:25:42.059338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.001 [2024-07-12 12:25:42.059362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.001 [2024-07-12 12:25:42.071351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.001 [2024-07-12 12:25:42.071381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.001 [2024-07-12 12:25:42.083351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:13.001 [2024-07-12 12:25:42.083378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.259 [2024-07-12 12:25:42.095346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.259 [2024-07-12 12:25:42.095370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.259 [2024-07-12 12:25:42.107387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.259 [2024-07-12 12:25:42.107418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.259 [2024-07-12 12:25:42.119393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.259 [2024-07-12 12:25:42.119422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.259 [2024-07-12 12:25:42.131439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.259 [2024-07-12 12:25:42.131473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.259 [2024-07-12 12:25:42.143432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.259 [2024-07-12 12:25:42.143463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.259 [2024-07-12 12:25:42.155426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.259 [2024-07-12 12:25:42.155454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.259 [2024-07-12 12:25:42.167439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.259 [2024-07-12 12:25:42.167469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.259 Running I/O for 5 seconds... 
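The error pairs that dominate the rest of this stage (subsystem.c: "Requested NSID 1 already in use" followed by nvmf_rpc.c: "Unable to add namespace") are repeated nvmf_subsystem_add_ns RPCs failing because NSID 1 (malloc0, added earlier in the log) is still attached to cnode1. The same failure can be provoked by hand against the target configured above, roughly:

# malloc0 already occupies NSID 1 on cnode1, so a second add with -n 1 is rejected
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1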
00:14:13.259 [2024-07-12 12:25:42.179443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.259 [2024-07-12 12:25:42.179470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.259 [2024-07-12 12:25:42.196847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.259 [2024-07-12 12:25:42.196891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.259 [2024-07-12 12:25:42.212620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.259 [2024-07-12 12:25:42.212652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.259 [2024-07-12 12:25:42.222296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.259 [2024-07-12 12:25:42.222358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.259 [2024-07-12 12:25:42.238164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.259 [2024-07-12 12:25:42.238195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.259 [2024-07-12 12:25:42.252960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.259 [2024-07-12 12:25:42.252990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.259 [2024-07-12 12:25:42.268136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.259 [2024-07-12 12:25:42.268195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.259 [2024-07-12 12:25:42.277412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.259 [2024-07-12 12:25:42.277457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.259 [2024-07-12 12:25:42.293330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.259 [2024-07-12 12:25:42.293361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.259 [2024-07-12 12:25:42.302941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.259 [2024-07-12 12:25:42.302974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.259 [2024-07-12 12:25:42.319326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.259 [2024-07-12 12:25:42.319356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.259 [2024-07-12 12:25:42.336529] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.259 [2024-07-12 12:25:42.336561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.515 [2024-07-12 12:25:42.354376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.515 [2024-07-12 12:25:42.354423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.515 [2024-07-12 12:25:42.368428] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.515 [2024-07-12 12:25:42.368460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.515 [2024-07-12 12:25:42.385325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.515 
[2024-07-12 12:25:42.385364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.515 [2024-07-12 12:25:42.400705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.515 [2024-07-12 12:25:42.400736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.515 [2024-07-12 12:25:42.409766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.515 [2024-07-12 12:25:42.409807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.515 [2024-07-12 12:25:42.425176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.515 [2024-07-12 12:25:42.425216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.515 [2024-07-12 12:25:42.440828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.515 [2024-07-12 12:25:42.440861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.515 [2024-07-12 12:25:42.457280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.515 [2024-07-12 12:25:42.457313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.515 [2024-07-12 12:25:42.474212] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.515 [2024-07-12 12:25:42.474248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.515 [2024-07-12 12:25:42.490367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.515 [2024-07-12 12:25:42.490398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.515 [2024-07-12 12:25:42.508892] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.515 [2024-07-12 12:25:42.508922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.515 [2024-07-12 12:25:42.523238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.515 [2024-07-12 12:25:42.523271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.515 [2024-07-12 12:25:42.540328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.515 [2024-07-12 12:25:42.540362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.515 [2024-07-12 12:25:42.556725] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.515 [2024-07-12 12:25:42.556758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.515 [2024-07-12 12:25:42.574110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.515 [2024-07-12 12:25:42.574142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.515 [2024-07-12 12:25:42.590835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.515 [2024-07-12 12:25:42.590870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.771 [2024-07-12 12:25:42.607217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.771 [2024-07-12 12:25:42.607264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.771 [2024-07-12 12:25:42.625688] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.771 [2024-07-12 12:25:42.625733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.771 [2024-07-12 12:25:42.640410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.771 [2024-07-12 12:25:42.640449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.771 [2024-07-12 12:25:42.651908] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.771 [2024-07-12 12:25:42.651942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.771 [2024-07-12 12:25:42.667759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.771 [2024-07-12 12:25:42.667810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.771 [2024-07-12 12:25:42.686658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.771 [2024-07-12 12:25:42.686692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.771 [2024-07-12 12:25:42.701417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.771 [2024-07-12 12:25:42.701458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.771 [2024-07-12 12:25:42.712954] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.771 [2024-07-12 12:25:42.712985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.771 [2024-07-12 12:25:42.729821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.771 [2024-07-12 12:25:42.729852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.771 [2024-07-12 12:25:42.745389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.771 [2024-07-12 12:25:42.745420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.771 [2024-07-12 12:25:42.760962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.771 [2024-07-12 12:25:42.760992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.771 [2024-07-12 12:25:42.779359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.771 [2024-07-12 12:25:42.779389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.771 [2024-07-12 12:25:42.793471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.771 [2024-07-12 12:25:42.793503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.771 [2024-07-12 12:25:42.808892] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.771 [2024-07-12 12:25:42.808929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.771 [2024-07-12 12:25:42.827619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.771 [2024-07-12 12:25:42.827654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.771 [2024-07-12 12:25:42.842366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.771 [2024-07-12 12:25:42.842398] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.771 [2024-07-12 12:25:42.851354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.771 [2024-07-12 12:25:42.851387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.039 [2024-07-12 12:25:42.867297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.039 [2024-07-12 12:25:42.867338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.039 [2024-07-12 12:25:42.876952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.039 [2024-07-12 12:25:42.876983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.039 [2024-07-12 12:25:42.888039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.039 [2024-07-12 12:25:42.888071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.039 [2024-07-12 12:25:42.898449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.039 [2024-07-12 12:25:42.898481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.039 [2024-07-12 12:25:42.913244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.039 [2024-07-12 12:25:42.913276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.039 [2024-07-12 12:25:42.930601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.039 [2024-07-12 12:25:42.930632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.039 [2024-07-12 12:25:42.946817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.039 [2024-07-12 12:25:42.946874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.039 [2024-07-12 12:25:42.956533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.039 [2024-07-12 12:25:42.956577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.039 [2024-07-12 12:25:42.973203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.039 [2024-07-12 12:25:42.973250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.039 [2024-07-12 12:25:42.990411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.039 [2024-07-12 12:25:42.990457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.039 [2024-07-12 12:25:43.006285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.039 [2024-07-12 12:25:43.006315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.039 [2024-07-12 12:25:43.023694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.039 [2024-07-12 12:25:43.023751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.039 [2024-07-12 12:25:43.039936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.039 [2024-07-12 12:25:43.039963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.039 [2024-07-12 12:25:43.057377] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.039 [2024-07-12 12:25:43.057406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.039 [2024-07-12 12:25:43.072153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.039 [2024-07-12 12:25:43.072185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.039 [2024-07-12 12:25:43.087741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.039 [2024-07-12 12:25:43.087786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.039 [2024-07-12 12:25:43.104758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.039 [2024-07-12 12:25:43.104823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.297 [2024-07-12 12:25:43.121679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.297 [2024-07-12 12:25:43.121711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.297 [2024-07-12 12:25:43.138654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.297 [2024-07-12 12:25:43.138684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.297 [2024-07-12 12:25:43.156315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.297 [2024-07-12 12:25:43.156362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.297 [2024-07-12 12:25:43.172329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.297 [2024-07-12 12:25:43.172359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.297 [2024-07-12 12:25:43.189533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.297 [2024-07-12 12:25:43.189574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.297 [2024-07-12 12:25:43.206121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.297 [2024-07-12 12:25:43.206152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.297 [2024-07-12 12:25:43.223413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.297 [2024-07-12 12:25:43.223444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.297 [2024-07-12 12:25:43.238921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.297 [2024-07-12 12:25:43.238971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.297 [2024-07-12 12:25:43.254547] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.297 [2024-07-12 12:25:43.254578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.297 [2024-07-12 12:25:43.273395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.297 [2024-07-12 12:25:43.273425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.297 [2024-07-12 12:25:43.288318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.297 [2024-07-12 12:25:43.288365] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.297 [2024-07-12 12:25:43.307407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.297 [2024-07-12 12:25:43.307438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.297 [2024-07-12 12:25:43.321976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.297 [2024-07-12 12:25:43.322008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.297 [2024-07-12 12:25:43.338763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.297 [2024-07-12 12:25:43.338811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.297 [2024-07-12 12:25:43.355535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.297 [2024-07-12 12:25:43.355578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.297 [2024-07-12 12:25:43.372202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.297 [2024-07-12 12:25:43.372240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.555 [2024-07-12 12:25:43.389724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.555 [2024-07-12 12:25:43.389763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.555 [2024-07-12 12:25:43.404254] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.555 [2024-07-12 12:25:43.404290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.555 [2024-07-12 12:25:43.420055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.555 [2024-07-12 12:25:43.420094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.555 [2024-07-12 12:25:43.438886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.555 [2024-07-12 12:25:43.438919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.555 [2024-07-12 12:25:43.453831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.555 [2024-07-12 12:25:43.453862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.555 [2024-07-12 12:25:43.469776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.555 [2024-07-12 12:25:43.469825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.555 [2024-07-12 12:25:43.488606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.555 [2024-07-12 12:25:43.488641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.555 [2024-07-12 12:25:43.502682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.555 [2024-07-12 12:25:43.502713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.555 [2024-07-12 12:25:43.518057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.555 [2024-07-12 12:25:43.518089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.555 [2024-07-12 12:25:43.527558] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.555 [2024-07-12 12:25:43.527590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.555 [2024-07-12 12:25:43.543097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.555 [2024-07-12 12:25:43.543129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.555 [2024-07-12 12:25:43.560100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.555 [2024-07-12 12:25:43.560134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.555 [2024-07-12 12:25:43.576862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.555 [2024-07-12 12:25:43.576892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.555 [2024-07-12 12:25:43.593932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.555 [2024-07-12 12:25:43.593975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.555 [2024-07-12 12:25:43.610611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.555 [2024-07-12 12:25:43.610644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.555 [2024-07-12 12:25:43.628234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.555 [2024-07-12 12:25:43.628266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.843 [2024-07-12 12:25:43.645238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.843 [2024-07-12 12:25:43.645273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.843 [2024-07-12 12:25:43.662867] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.843 [2024-07-12 12:25:43.662900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.843 [2024-07-12 12:25:43.678204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.843 [2024-07-12 12:25:43.678236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.843 [2024-07-12 12:25:43.687033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.843 [2024-07-12 12:25:43.687064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.843 [2024-07-12 12:25:43.703932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.843 [2024-07-12 12:25:43.703979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.843 [2024-07-12 12:25:43.720388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.843 [2024-07-12 12:25:43.720432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.843 [2024-07-12 12:25:43.736713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.843 [2024-07-12 12:25:43.736758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.843 [2024-07-12 12:25:43.753237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.843 [2024-07-12 12:25:43.753268] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.843 [2024-07-12 12:25:43.769454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.843 [2024-07-12 12:25:43.769499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.843 [2024-07-12 12:25:43.785774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.843 [2024-07-12 12:25:43.785851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.843 [2024-07-12 12:25:43.802745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.843 [2024-07-12 12:25:43.802789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.843 [2024-07-12 12:25:43.818284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.843 [2024-07-12 12:25:43.818330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.843 [2024-07-12 12:25:43.834503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.843 [2024-07-12 12:25:43.834535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.843 [2024-07-12 12:25:43.851516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.843 [2024-07-12 12:25:43.851548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.843 [2024-07-12 12:25:43.867438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.843 [2024-07-12 12:25:43.867469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.843 [2024-07-12 12:25:43.878578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.843 [2024-07-12 12:25:43.878622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.843 [2024-07-12 12:25:43.895720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.843 [2024-07-12 12:25:43.895754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.843 [2024-07-12 12:25:43.912221] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.843 [2024-07-12 12:25:43.912253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.103 [2024-07-12 12:25:43.928710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.103 [2024-07-12 12:25:43.928741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.103 [2024-07-12 12:25:43.945181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.103 [2024-07-12 12:25:43.945227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.103 [2024-07-12 12:25:43.962358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.103 [2024-07-12 12:25:43.962389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.103 [2024-07-12 12:25:43.979985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.103 [2024-07-12 12:25:43.980016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.103 [2024-07-12 12:25:43.996704] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.103 [2024-07-12 12:25:43.996736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.103 [2024-07-12 12:25:44.013919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.103 [2024-07-12 12:25:44.013951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.103 [2024-07-12 12:25:44.029848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.103 [2024-07-12 12:25:44.029893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.103 [2024-07-12 12:25:44.046890] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.103 [2024-07-12 12:25:44.046919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.103 [2024-07-12 12:25:44.063924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.103 [2024-07-12 12:25:44.063969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.103 [2024-07-12 12:25:44.080103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.103 [2024-07-12 12:25:44.080148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.103 [2024-07-12 12:25:44.096005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.103 [2024-07-12 12:25:44.096034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.103 [2024-07-12 12:25:44.112441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.103 [2024-07-12 12:25:44.112485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.103 [2024-07-12 12:25:44.121900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.103 [2024-07-12 12:25:44.121945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.103 [2024-07-12 12:25:44.137124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.103 [2024-07-12 12:25:44.137169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.103 [2024-07-12 12:25:44.153041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.103 [2024-07-12 12:25:44.153086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.103 [2024-07-12 12:25:44.171150] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.103 [2024-07-12 12:25:44.171195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.360 [2024-07-12 12:25:44.186052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.360 [2024-07-12 12:25:44.186082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.360 [2024-07-12 12:25:44.198220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.360 [2024-07-12 12:25:44.198264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.360 [2024-07-12 12:25:44.214972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.361 [2024-07-12 12:25:44.215017] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.361 [2024-07-12 12:25:44.229916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.361 [2024-07-12 12:25:44.229945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.361 [2024-07-12 12:25:44.247181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.361 [2024-07-12 12:25:44.247213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.361 [2024-07-12 12:25:44.261493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.361 [2024-07-12 12:25:44.261541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.361 [2024-07-12 12:25:44.279264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.361 [2024-07-12 12:25:44.279296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.361 [2024-07-12 12:25:44.293634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.361 [2024-07-12 12:25:44.293681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.361 [2024-07-12 12:25:44.308984] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.361 [2024-07-12 12:25:44.309028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.361 [2024-07-12 12:25:44.326295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.361 [2024-07-12 12:25:44.326342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.361 [2024-07-12 12:25:44.343351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.361 [2024-07-12 12:25:44.343382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.361 [2024-07-12 12:25:44.360254] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.361 [2024-07-12 12:25:44.360286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.361 [2024-07-12 12:25:44.376845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.361 [2024-07-12 12:25:44.376875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.361 [2024-07-12 12:25:44.392131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.361 [2024-07-12 12:25:44.392163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.361 [2024-07-12 12:25:44.409283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.361 [2024-07-12 12:25:44.409331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.361 [2024-07-12 12:25:44.424111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.361 [2024-07-12 12:25:44.424158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.361 [2024-07-12 12:25:44.439263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.361 [2024-07-12 12:25:44.439297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.618 [2024-07-12 12:25:44.448438] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.618 [2024-07-12 12:25:44.448490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.618 [2024-07-12 12:25:44.464444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.618 [2024-07-12 12:25:44.464492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.618 [2024-07-12 12:25:44.480569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.618 [2024-07-12 12:25:44.480617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.618 [2024-07-12 12:25:44.498011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.618 [2024-07-12 12:25:44.498060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.618 [2024-07-12 12:25:44.512221] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.618 [2024-07-12 12:25:44.512267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.618 [2024-07-12 12:25:44.527292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.618 [2024-07-12 12:25:44.527349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.618 [2024-07-12 12:25:44.536818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.618 [2024-07-12 12:25:44.536851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.618 [2024-07-12 12:25:44.552130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.618 [2024-07-12 12:25:44.552163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.618 [2024-07-12 12:25:44.569259] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.618 [2024-07-12 12:25:44.569290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.618 [2024-07-12 12:25:44.587050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.618 [2024-07-12 12:25:44.587081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.618 [2024-07-12 12:25:44.603171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.618 [2024-07-12 12:25:44.603203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.618 [2024-07-12 12:25:44.621788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.618 [2024-07-12 12:25:44.621843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.618 [2024-07-12 12:25:44.635514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.618 [2024-07-12 12:25:44.635545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.618 [2024-07-12 12:25:44.650145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.618 [2024-07-12 12:25:44.650177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.618 [2024-07-12 12:25:44.665201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.618 [2024-07-12 12:25:44.665232] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.618 [2024-07-12 12:25:44.674057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.618 [2024-07-12 12:25:44.674102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.618 [2024-07-12 12:25:44.689903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.618 [2024-07-12 12:25:44.689949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.876 [2024-07-12 12:25:44.705098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.876 [2024-07-12 12:25:44.705128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.876 [2024-07-12 12:25:44.721815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.876 [2024-07-12 12:25:44.721858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.876 [2024-07-12 12:25:44.739872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.876 [2024-07-12 12:25:44.739915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.876 [2024-07-12 12:25:44.754384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.876 [2024-07-12 12:25:44.754430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.876 [2024-07-12 12:25:44.771743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.876 [2024-07-12 12:25:44.771775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.876 [2024-07-12 12:25:44.785870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.876 [2024-07-12 12:25:44.785916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.876 [2024-07-12 12:25:44.801796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.876 [2024-07-12 12:25:44.801855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.876 [2024-07-12 12:25:44.820777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.876 [2024-07-12 12:25:44.820835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.876 [2024-07-12 12:25:44.835183] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.876 [2024-07-12 12:25:44.835214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.876 [2024-07-12 12:25:44.844713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.876 [2024-07-12 12:25:44.844757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.876 [2024-07-12 12:25:44.856411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.876 [2024-07-12 12:25:44.856455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.876 [2024-07-12 12:25:44.871899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.876 [2024-07-12 12:25:44.871943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.876 [2024-07-12 12:25:44.890226] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.876 [2024-07-12 12:25:44.890259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.876 [2024-07-12 12:25:44.904840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.876 [2024-07-12 12:25:44.904885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.876 [2024-07-12 12:25:44.916076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.876 [2024-07-12 12:25:44.916125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.876 [2024-07-12 12:25:44.931976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.876 [2024-07-12 12:25:44.932021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.876 [2024-07-12 12:25:44.949149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.876 [2024-07-12 12:25:44.949181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.134 [2024-07-12 12:25:44.965139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.134 [2024-07-12 12:25:44.965184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.134 [2024-07-12 12:25:44.981041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.134 [2024-07-12 12:25:44.981070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.134 [2024-07-12 12:25:44.998230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.134 [2024-07-12 12:25:44.998277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.134 [2024-07-12 12:25:45.012851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.134 [2024-07-12 12:25:45.012895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.134 [2024-07-12 12:25:45.029368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.134 [2024-07-12 12:25:45.029397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.134 [2024-07-12 12:25:45.045712] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.134 [2024-07-12 12:25:45.045756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.134 [2024-07-12 12:25:45.063342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.134 [2024-07-12 12:25:45.063373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.134 [2024-07-12 12:25:45.079174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.135 [2024-07-12 12:25:45.079206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.135 [2024-07-12 12:25:45.096092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.135 [2024-07-12 12:25:45.096120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.135 [2024-07-12 12:25:45.112904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.135 [2024-07-12 12:25:45.112954] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.135 [2024-07-12 12:25:45.129766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.135 [2024-07-12 12:25:45.129813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.135 [2024-07-12 12:25:45.145929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.135 [2024-07-12 12:25:45.145960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.135 [2024-07-12 12:25:45.162904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.135 [2024-07-12 12:25:45.162935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.135 [2024-07-12 12:25:45.178967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.135 [2024-07-12 12:25:45.179013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.135 [2024-07-12 12:25:45.195414] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.135 [2024-07-12 12:25:45.195444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.135 [2024-07-12 12:25:45.213989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.135 [2024-07-12 12:25:45.214016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.392 [2024-07-12 12:25:45.228574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.393 [2024-07-12 12:25:45.228618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.393 [2024-07-12 12:25:45.243656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.393 [2024-07-12 12:25:45.243701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.393 [2024-07-12 12:25:45.252767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.393 [2024-07-12 12:25:45.252838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.393 [2024-07-12 12:25:45.268618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.393 [2024-07-12 12:25:45.268650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.393 [2024-07-12 12:25:45.284195] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.393 [2024-07-12 12:25:45.284255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.393 [2024-07-12 12:25:45.302603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.393 [2024-07-12 12:25:45.302648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.393 [2024-07-12 12:25:45.317741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.393 [2024-07-12 12:25:45.317769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.393 [2024-07-12 12:25:45.333006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.393 [2024-07-12 12:25:45.333035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.393 [2024-07-12 12:25:45.351545] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.393 [2024-07-12 12:25:45.351576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.393 [2024-07-12 12:25:45.366109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.393 [2024-07-12 12:25:45.366147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.393 [2024-07-12 12:25:45.382636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.393 [2024-07-12 12:25:45.382682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.393 [2024-07-12 12:25:45.398497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.393 [2024-07-12 12:25:45.398541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.393 [2024-07-12 12:25:45.416381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.393 [2024-07-12 12:25:45.416424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.393 [2024-07-12 12:25:45.432993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.393 [2024-07-12 12:25:45.433037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.393 [2024-07-12 12:25:45.449266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.393 [2024-07-12 12:25:45.449310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.393 [2024-07-12 12:25:45.465946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.393 [2024-07-12 12:25:45.466005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.651 [2024-07-12 12:25:45.481931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.651 [2024-07-12 12:25:45.481975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.651 [2024-07-12 12:25:45.499177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.651 [2024-07-12 12:25:45.499218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.651 [2024-07-12 12:25:45.515673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.651 [2024-07-12 12:25:45.515718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.651 [2024-07-12 12:25:45.531210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.651 [2024-07-12 12:25:45.531254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.651 [2024-07-12 12:25:45.540884] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.651 [2024-07-12 12:25:45.540929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.651 [2024-07-12 12:25:45.556523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.651 [2024-07-12 12:25:45.556568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.651 [2024-07-12 12:25:45.565769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.651 [2024-07-12 12:25:45.565842] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.651 [2024-07-12 12:25:45.581462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.651 [2024-07-12 12:25:45.581508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.651 [2024-07-12 12:25:45.597405] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.651 [2024-07-12 12:25:45.597434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.651 [2024-07-12 12:25:45.614959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.651 [2024-07-12 12:25:45.615003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.651 [2024-07-12 12:25:45.630894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.651 [2024-07-12 12:25:45.630924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.651 [2024-07-12 12:25:45.648173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.651 [2024-07-12 12:25:45.648217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.651 [2024-07-12 12:25:45.665859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.651 [2024-07-12 12:25:45.665902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.651 [2024-07-12 12:25:45.682099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.651 [2024-07-12 12:25:45.682143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.651 [2024-07-12 12:25:45.699242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.651 [2024-07-12 12:25:45.699288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.651 [2024-07-12 12:25:45.716390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.651 [2024-07-12 12:25:45.716434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.651 [2024-07-12 12:25:45.733140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.651 [2024-07-12 12:25:45.733170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.909 [2024-07-12 12:25:45.747299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.909 [2024-07-12 12:25:45.747356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.909 [2024-07-12 12:25:45.762592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.909 [2024-07-12 12:25:45.762636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.909 [2024-07-12 12:25:45.774526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.909 [2024-07-12 12:25:45.774572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.909 [2024-07-12 12:25:45.790623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.909 [2024-07-12 12:25:45.790667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.909 [2024-07-12 12:25:45.807559] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.909 [2024-07-12 12:25:45.807589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.909 [2024-07-12 12:25:45.824534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.909 [2024-07-12 12:25:45.824578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.909 [2024-07-12 12:25:45.840629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.909 [2024-07-12 12:25:45.840674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.909 [2024-07-12 12:25:45.859190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.909 [2024-07-12 12:25:45.859221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.909 [2024-07-12 12:25:45.873761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.909 [2024-07-12 12:25:45.873838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.909 [2024-07-12 12:25:45.883315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.909 [2024-07-12 12:25:45.883344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.909 [2024-07-12 12:25:45.898751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.909 [2024-07-12 12:25:45.898794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.909 [2024-07-12 12:25:45.915764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.909 [2024-07-12 12:25:45.915836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.909 [2024-07-12 12:25:45.932965] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.909 [2024-07-12 12:25:45.933019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.909 [2024-07-12 12:25:45.949621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.909 [2024-07-12 12:25:45.949665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.909 [2024-07-12 12:25:45.965972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.909 [2024-07-12 12:25:45.966002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.909 [2024-07-12 12:25:45.982232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.909 [2024-07-12 12:25:45.982276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.168 [2024-07-12 12:25:45.998486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.168 [2024-07-12 12:25:45.998531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.168 [2024-07-12 12:25:46.017209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.168 [2024-07-12 12:25:46.017254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.168 [2024-07-12 12:25:46.031570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.168 [2024-07-12 12:25:46.031616] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.168 [2024-07-12 12:25:46.047453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.168 [2024-07-12 12:25:46.047500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.168 [2024-07-12 12:25:46.065718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.168 [2024-07-12 12:25:46.065761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.168 [2024-07-12 12:25:46.081057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.168 [2024-07-12 12:25:46.081088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.168 [2024-07-12 12:25:46.098420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.168 [2024-07-12 12:25:46.098466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.168 [2024-07-12 12:25:46.114421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.168 [2024-07-12 12:25:46.114466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.168 [2024-07-12 12:25:46.131906] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.168 [2024-07-12 12:25:46.131937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.168 [2024-07-12 12:25:46.148154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.168 [2024-07-12 12:25:46.148185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.168 [2024-07-12 12:25:46.165127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.168 [2024-07-12 12:25:46.165161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.168 [2024-07-12 12:25:46.182663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.168 [2024-07-12 12:25:46.182693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.168 [2024-07-12 12:25:46.197378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.168 [2024-07-12 12:25:46.197412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.168 [2024-07-12 12:25:46.213152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.168 [2024-07-12 12:25:46.213214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.168 [2024-07-12 12:25:46.230471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.168 [2024-07-12 12:25:46.230517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.168 [2024-07-12 12:25:46.247240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.168 [2024-07-12 12:25:46.247270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.427 [2024-07-12 12:25:46.264455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.427 [2024-07-12 12:25:46.264501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.427 [2024-07-12 12:25:46.280940] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.427 [2024-07-12 12:25:46.280970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.427 [2024-07-12 12:25:46.298270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.427 [2024-07-12 12:25:46.298314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.427 [2024-07-12 12:25:46.314557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.427 [2024-07-12 12:25:46.314601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.427 [2024-07-12 12:25:46.331138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.427 [2024-07-12 12:25:46.331167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.427 [2024-07-12 12:25:46.348708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.427 [2024-07-12 12:25:46.348738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.427 [2024-07-12 12:25:46.364349] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.427 [2024-07-12 12:25:46.364380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.427 [2024-07-12 12:25:46.383340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.427 [2024-07-12 12:25:46.383373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.427 [2024-07-12 12:25:46.397957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.427 [2024-07-12 12:25:46.397987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.427 [2024-07-12 12:25:46.408001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.427 [2024-07-12 12:25:46.408030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.427 [2024-07-12 12:25:46.422468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.427 [2024-07-12 12:25:46.422513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.427 [2024-07-12 12:25:46.438209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.427 [2024-07-12 12:25:46.438269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.427 [2024-07-12 12:25:46.456682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.427 [2024-07-12 12:25:46.456713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.427 [2024-07-12 12:25:46.471586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.427 [2024-07-12 12:25:46.471616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.427 [2024-07-12 12:25:46.488324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.427 [2024-07-12 12:25:46.488369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.427 [2024-07-12 12:25:46.505375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.427 [2024-07-12 12:25:46.505406] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.686 [2024-07-12 12:25:46.522411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.686 [2024-07-12 12:25:46.522443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.686 [2024-07-12 12:25:46.538592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.686 [2024-07-12 12:25:46.538638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.686 [2024-07-12 12:25:46.555281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.686 [2024-07-12 12:25:46.555334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.686 [2024-07-12 12:25:46.571230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.686 [2024-07-12 12:25:46.571276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.686 [2024-07-12 12:25:46.589788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.686 [2024-07-12 12:25:46.589843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.686 [2024-07-12 12:25:46.605009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.686 [2024-07-12 12:25:46.605039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.686 [2024-07-12 12:25:46.621782] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.686 [2024-07-12 12:25:46.621854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.686 [2024-07-12 12:25:46.637154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.686 [2024-07-12 12:25:46.637197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.686 [2024-07-12 12:25:46.652675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.686 [2024-07-12 12:25:46.652707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.686 [2024-07-12 12:25:46.662055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.686 [2024-07-12 12:25:46.662086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.686 [2024-07-12 12:25:46.678374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.686 [2024-07-12 12:25:46.678405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.686 [2024-07-12 12:25:46.696047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.686 [2024-07-12 12:25:46.696078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.686 [2024-07-12 12:25:46.712459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.686 [2024-07-12 12:25:46.712505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.686 [2024-07-12 12:25:46.729595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.686 [2024-07-12 12:25:46.729628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.686 [2024-07-12 12:25:46.746254] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.686 [2024-07-12 12:25:46.746286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.686 [2024-07-12 12:25:46.761619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.686 [2024-07-12 12:25:46.761650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.945 [2024-07-12 12:25:46.778610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.945 [2024-07-12 12:25:46.778656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.945 [2024-07-12 12:25:46.793606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.945 [2024-07-12 12:25:46.793651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.945 [2024-07-12 12:25:46.809751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.945 [2024-07-12 12:25:46.809796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.945 [2024-07-12 12:25:46.825920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.945 [2024-07-12 12:25:46.825982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.945 [2024-07-12 12:25:46.842671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.945 [2024-07-12 12:25:46.842716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.945 [2024-07-12 12:25:46.859981] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.945 [2024-07-12 12:25:46.860012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.945 [2024-07-12 12:25:46.876613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.945 [2024-07-12 12:25:46.876661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.945 [2024-07-12 12:25:46.893703] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.945 [2024-07-12 12:25:46.893750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.945 [2024-07-12 12:25:46.910242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.945 [2024-07-12 12:25:46.910287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.945 [2024-07-12 12:25:46.927494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.945 [2024-07-12 12:25:46.927525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.945 [2024-07-12 12:25:46.942421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.945 [2024-07-12 12:25:46.942453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.945 [2024-07-12 12:25:46.954331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.945 [2024-07-12 12:25:46.954376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.945 [2024-07-12 12:25:46.969314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.945 [2024-07-12 12:25:46.969360] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.945 [2024-07-12 12:25:46.984640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.945 [2024-07-12 12:25:46.984685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.945 [2024-07-12 12:25:47.001861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.945 [2024-07-12 12:25:47.001891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.945 [2024-07-12 12:25:47.018908] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.945 [2024-07-12 12:25:47.018936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.204 [2024-07-12 12:25:47.037459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.204 [2024-07-12 12:25:47.037506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.204 [2024-07-12 12:25:47.052318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.204 [2024-07-12 12:25:47.052378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.204 [2024-07-12 12:25:47.069361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.204 [2024-07-12 12:25:47.069406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.204 [2024-07-12 12:25:47.087189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.204 [2024-07-12 12:25:47.087220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.204 [2024-07-12 12:25:47.101480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.204 [2024-07-12 12:25:47.101525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.204 [2024-07-12 12:25:47.118500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.204 [2024-07-12 12:25:47.118545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.204 [2024-07-12 12:25:47.135156] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.204 [2024-07-12 12:25:47.135187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.204 [2024-07-12 12:25:47.152290] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.204 [2024-07-12 12:25:47.152335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.204 [2024-07-12 12:25:47.168964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.204 [2024-07-12 12:25:47.168995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.204 [2024-07-12 12:25:47.183487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.204 [2024-07-12 12:25:47.183519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.204 00:14:18.204 Latency(us) 00:14:18.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.204 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:14:18.204 Nvme1n1 : 5.01 12008.11 93.81 0.00 0.00 10644.83 4468.36 20256.58 00:14:18.204 
=================================================================================================================== 00:14:18.204 Total : 12008.11 93.81 0.00 0.00 10644.83 4468.36 20256.58 00:14:18.204 [2024-07-12 12:25:47.192752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.204 [2024-07-12 12:25:47.192795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.204 [2024-07-12 12:25:47.204750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.204 [2024-07-12 12:25:47.204794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.204 [2024-07-12 12:25:47.216770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.204 [2024-07-12 12:25:47.216848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.204 [2024-07-12 12:25:47.228773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.204 [2024-07-12 12:25:47.228831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.204 [2024-07-12 12:25:47.240777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.204 [2024-07-12 12:25:47.240834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.204 [2024-07-12 12:25:47.252786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.204 [2024-07-12 12:25:47.252829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.204 [2024-07-12 12:25:47.264781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.205 [2024-07-12 12:25:47.264854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.205 [2024-07-12 12:25:47.276783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.205 [2024-07-12 12:25:47.276855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.463 [2024-07-12 12:25:47.288795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.463 [2024-07-12 12:25:47.288840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.463 [2024-07-12 12:25:47.300812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.463 [2024-07-12 12:25:47.300858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.463 [2024-07-12 12:25:47.312797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.463 [2024-07-12 12:25:47.312841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.463 [2024-07-12 12:25:47.324785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.463 [2024-07-12 12:25:47.324850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.463 [2024-07-12 12:25:47.336838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.463 [2024-07-12 12:25:47.336885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.463 [2024-07-12 12:25:47.348791] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.463 [2024-07-12 12:25:47.348862] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.463 [2024-07-12 12:25:47.360824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.463 [2024-07-12 12:25:47.360875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.463 [2024-07-12 12:25:47.372809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.463 [2024-07-12 12:25:47.372849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.463 [2024-07-12 12:25:47.384789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:18.463 [2024-07-12 12:25:47.384852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:18.463 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (79889) - No such process 00:14:18.463 12:25:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 79889 00:14:18.463 12:25:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.463 12:25:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.463 12:25:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:18.463 12:25:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.463 12:25:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:18.463 12:25:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.463 12:25:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:18.463 delay0 00:14:18.463 12:25:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.463 12:25:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:18.463 12:25:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.463 12:25:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:18.463 12:25:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.463 12:25:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:18.721 [2024-07-12 12:25:47.587504] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:25.327 Initializing NVMe Controllers 00:14:25.327 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:25.327 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:25.327 Initialization complete. Launching workers. 
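The abort step driven above can be replayed by hand with the same RPCs the zcopy script traces; this is a minimal sketch only, reusing the rpc.py path, the cnode1 NQN, and the exact flags that appear in this log (nothing here is verified beyond the trace itself):
  # put a delay bdev on top of malloc0 and expose it as NSID 1 (flags copied from the target/zcopy.sh trace)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # drive queued random I/O through the slowed namespace and submit aborts against it, as the test does
  /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
  # remove the namespace again once the run completes (zcopy.sh@52 in the trace)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1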
00:14:25.327 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 94 00:14:25.327 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 381, failed to submit 33 00:14:25.327 success 250, unsuccess 131, failed 0 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:25.327 rmmod nvme_tcp 00:14:25.327 rmmod nvme_fabrics 00:14:25.327 rmmod nvme_keyring 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 79735 ']' 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 79735 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 79735 ']' 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 79735 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79735 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:25.327 killing process with pid 79735 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79735' 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 79735 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 79735 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:25.327 12:25:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:25.327 12:25:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.327 12:25:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:25.327 12:25:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.327 12:25:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:25.327 00:14:25.327 real 0m24.587s 00:14:25.327 user 0m40.636s 00:14:25.327 sys 0m6.609s 00:14:25.327 12:25:54 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:14:25.327 12:25:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:25.327 ************************************ 00:14:25.327 END TEST nvmf_zcopy 00:14:25.327 ************************************ 00:14:25.327 12:25:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:25.327 12:25:54 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:25.327 12:25:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:25.327 12:25:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:25.327 12:25:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:25.327 ************************************ 00:14:25.327 START TEST nvmf_nmic 00:14:25.327 ************************************ 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:25.327 * Looking for test storage... 00:14:25.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:25.327 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:25.328 Cannot find device "nvmf_tgt_br" 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:25.328 Cannot find device "nvmf_tgt_br2" 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:25.328 Cannot find device "nvmf_tgt_br" 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:25.328 Cannot find device "nvmf_tgt_br2" 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:25.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:25.328 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:25.328 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:25.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:25.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:14:25.587 00:14:25.587 --- 10.0.0.2 ping statistics --- 00:14:25.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.587 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:25.587 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:25.587 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:14:25.587 00:14:25.587 --- 10.0.0.3 ping statistics --- 00:14:25.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.587 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:25.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:25.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:25.587 00:14:25.587 --- 10.0.0.1 ping statistics --- 00:14:25.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.587 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=80215 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 80215 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 80215 ']' 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:25.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:25.587 12:25:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:25.587 [2024-07-12 12:25:54.627095] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:14:25.587 [2024-07-12 12:25:54.627203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.845 [2024-07-12 12:25:54.771612] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:25.845 [2024-07-12 12:25:54.880629] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.845 [2024-07-12 12:25:54.880958] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.845 [2024-07-12 12:25:54.881060] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.845 [2024-07-12 12:25:54.881146] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.845 [2024-07-12 12:25:54.881222] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.845 [2024-07-12 12:25:54.881390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.845 [2024-07-12 12:25:54.881456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.845 [2024-07-12 12:25:54.881714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:25.845 [2024-07-12 12:25:54.881723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.103 [2024-07-12 12:25:54.940656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:26.669 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:26.669 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:14:26.669 12:25:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:26.669 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:26.669 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:26.669 12:25:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.669 12:25:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:26.669 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.669 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:26.669 [2024-07-12 12:25:55.717473] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.669 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.669 12:25:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:26.669 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.669 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:26.927 Malloc0 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:26.927 [2024-07-12 12:25:55.792160] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:26.927 test case1: single bdev can't be used in multiple subsystems 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:26.927 [2024-07-12 12:25:55.815999] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:26.927 [2024-07-12 12:25:55.816060] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:26.927 [2024-07-12 12:25:55.816103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.927 request: 00:14:26.927 { 00:14:26.927 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:26.927 "namespace": { 00:14:26.927 "bdev_name": "Malloc0", 00:14:26.927 "no_auto_visible": false 00:14:26.927 }, 00:14:26.927 "method": "nvmf_subsystem_add_ns", 00:14:26.927 "req_id": 1 00:14:26.927 } 00:14:26.927 Got JSON-RPC error response 00:14:26.927 response: 00:14:26.927 { 00:14:26.927 "code": -32602, 00:14:26.927 "message": "Invalid parameters" 00:14:26.927 } 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 
-eq 0 ']' 00:14:26.927 Adding namespace failed - expected result. 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:26.927 test case2: host connect to nvmf target in multiple paths 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.927 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:26.927 [2024-07-12 12:25:55.828171] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:26.928 12:25:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.928 12:25:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:26.928 12:25:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:27.185 12:25:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:27.185 12:25:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:14:27.185 12:25:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:27.185 12:25:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:27.185 12:25:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:14:29.085 12:25:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:29.085 12:25:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:29.085 12:25:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:29.085 12:25:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:29.085 12:25:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:29.085 12:25:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:14:29.085 12:25:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:29.085 [global] 00:14:29.085 thread=1 00:14:29.085 invalidate=1 00:14:29.085 rw=write 00:14:29.085 time_based=1 00:14:29.085 runtime=1 00:14:29.085 ioengine=libaio 00:14:29.085 direct=1 00:14:29.085 bs=4096 00:14:29.085 iodepth=1 00:14:29.085 norandommap=0 00:14:29.085 numjobs=1 00:14:29.085 00:14:29.085 verify_dump=1 00:14:29.085 verify_backlog=512 00:14:29.085 verify_state_save=0 00:14:29.085 do_verify=1 00:14:29.085 verify=crc32c-intel 00:14:29.085 [job0] 00:14:29.085 filename=/dev/nvme0n1 00:14:29.085 Could not set queue depth (nvme0n1) 00:14:29.342 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:29.342 fio-3.35 00:14:29.342 Starting 1 thread 00:14:30.715 00:14:30.715 job0: (groupid=0, jobs=1): err= 0: pid=80306: Fri Jul 12 12:25:59 
2024 00:14:30.715 read: IOPS=3039, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec) 00:14:30.715 slat (nsec): min=11557, max=62039, avg=14886.23, stdev=4045.64 00:14:30.715 clat (usec): min=137, max=296, avg=179.75, stdev=16.67 00:14:30.715 lat (usec): min=149, max=321, avg=194.64, stdev=17.37 00:14:30.715 clat percentiles (usec): 00:14:30.715 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 167], 00:14:30.715 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182], 00:14:30.715 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 210], 00:14:30.715 | 99.00th=[ 229], 99.50th=[ 239], 99.90th=[ 253], 99.95th=[ 285], 00:14:30.715 | 99.99th=[ 297] 00:14:30.715 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:14:30.715 slat (usec): min=15, max=104, avg=21.35, stdev= 5.00 00:14:30.715 clat (usec): min=85, max=238, avg=107.98, stdev=11.86 00:14:30.715 lat (usec): min=104, max=342, avg=129.33, stdev=13.73 00:14:30.715 clat percentiles (usec): 00:14:30.715 | 1.00th=[ 90], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 99], 00:14:30.715 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 109], 00:14:30.715 | 70.00th=[ 113], 80.00th=[ 117], 90.00th=[ 124], 95.00th=[ 131], 00:14:30.715 | 99.00th=[ 143], 99.50th=[ 151], 99.90th=[ 184], 99.95th=[ 198], 00:14:30.715 | 99.99th=[ 239] 00:14:30.715 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:14:30.715 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:30.715 lat (usec) : 100=12.31%, 250=87.62%, 500=0.07% 00:14:30.715 cpu : usr=3.00%, sys=8.10%, ctx=6115, majf=0, minf=2 00:14:30.715 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:30.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.715 issued rwts: total=3043,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:30.715 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:30.715 00:14:30.715 Run status group 0 (all jobs): 00:14:30.715 READ: bw=11.9MiB/s (12.5MB/s), 11.9MiB/s-11.9MiB/s (12.5MB/s-12.5MB/s), io=11.9MiB (12.5MB), run=1001-1001msec 00:14:30.715 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:14:30.715 00:14:30.715 Disk stats (read/write): 00:14:30.715 nvme0n1: ios=2609/3015, merge=0/0, ticks=492/338, in_queue=830, util=91.27% 00:14:30.715 12:25:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:30.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 
-- # nvmftestfini 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:30.716 rmmod nvme_tcp 00:14:30.716 rmmod nvme_fabrics 00:14:30.716 rmmod nvme_keyring 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 80215 ']' 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 80215 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 80215 ']' 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 80215 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80215 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:30.716 killing process with pid 80215 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80215' 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 80215 00:14:30.716 12:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 80215 00:14:30.974 12:25:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:30.974 12:25:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:30.974 12:25:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:30.974 12:25:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:30.974 12:25:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:30.974 12:25:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.974 12:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.974 12:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.974 12:25:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:30.974 00:14:30.974 real 0m5.844s 00:14:30.974 user 0m18.734s 00:14:30.974 sys 0m2.324s 00:14:30.974 12:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:30.974 12:25:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:30.974 ************************************ 00:14:30.974 END TEST nvmf_nmic 00:14:30.974 ************************************ 00:14:30.974 12:25:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:30.974 12:25:59 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:14:30.974 12:25:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:30.974 12:25:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:30.974 12:25:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:30.974 ************************************ 00:14:30.974 START TEST nvmf_fio_target 00:14:30.974 ************************************ 00:14:30.974 12:25:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:31.234 * Looking for test storage... 00:14:31.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:31.234 Cannot find device "nvmf_tgt_br" 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:14:31.234 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:31.235 Cannot find device "nvmf_tgt_br2" 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:14:31.235 Cannot find device "nvmf_tgt_br" 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:31.235 Cannot find device "nvmf_tgt_br2" 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:31.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:31.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:31.235 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:31.495 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:31.495 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:31.495 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:31.495 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:31.495 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:31.495 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:31.495 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:31.495 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:31.495 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:14:31.495 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:31.495 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:31.495 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:31.495 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:31.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:14:31.495 00:14:31.495 --- 10.0.0.2 ping statistics --- 00:14:31.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.495 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:14:31.495 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:31.495 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:31.495 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:14:31.495 00:14:31.495 --- 10.0.0.3 ping statistics --- 00:14:31.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.495 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:31.495 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:31.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:31.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:14:31.496 00:14:31.496 --- 10.0.0.1 ping statistics --- 00:14:31.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.496 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:31.496 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.496 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:14:31.496 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:31.496 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.496 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:31.496 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:31.496 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.496 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:31.496 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:31.496 12:26:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:31.496 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.496 12:26:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:31.496 12:26:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.496 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=80488 00:14:31.496 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 80488 00:14:31.496 12:26:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:31.496 12:26:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 80488 ']' 00:14:31.496 12:26:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.496 12:26:00 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.496 12:26:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.496 12:26:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.496 12:26:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.496 [2024-07-12 12:26:00.501670] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:31.496 [2024-07-12 12:26:00.501753] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.759 [2024-07-12 12:26:00.638000] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:31.759 [2024-07-12 12:26:00.730604] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.759 [2024-07-12 12:26:00.730673] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.759 [2024-07-12 12:26:00.730700] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.759 [2024-07-12 12:26:00.730709] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.759 [2024-07-12 12:26:00.730716] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.759 [2024-07-12 12:26:00.730880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.759 [2024-07-12 12:26:00.731572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.759 [2024-07-12 12:26:00.731659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.759 [2024-07-12 12:26:00.731669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.759 [2024-07-12 12:26:00.788458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:32.689 12:26:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.689 12:26:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:14:32.689 12:26:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:32.689 12:26:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:32.689 12:26:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.689 12:26:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.689 12:26:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:32.946 [2024-07-12 12:26:01.785604] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.946 12:26:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:33.202 12:26:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:33.202 12:26:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:14:33.460 12:26:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:33.461 12:26:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:33.724 12:26:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:33.724 12:26:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:33.981 12:26:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:33.981 12:26:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:34.238 12:26:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:34.802 12:26:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:34.802 12:26:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:34.802 12:26:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:34.802 12:26:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:35.367 12:26:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:35.367 12:26:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:35.624 12:26:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:35.624 12:26:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:35.624 12:26:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:36.224 12:26:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:36.224 12:26:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:36.224 12:26:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.481 [2024-07-12 12:26:05.467715] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:36.481 12:26:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:36.738 12:26:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:36.995 12:26:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:37.252 12:26:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:37.252 12:26:06 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1198 -- # local i=0 00:14:37.252 12:26:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:37.252 12:26:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:14:37.252 12:26:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:14:37.252 12:26:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:14:39.148 12:26:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:39.148 12:26:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:39.148 12:26:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:39.148 12:26:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:14:39.148 12:26:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:39.148 12:26:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:14:39.148 12:26:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:39.148 [global] 00:14:39.148 thread=1 00:14:39.148 invalidate=1 00:14:39.148 rw=write 00:14:39.148 time_based=1 00:14:39.148 runtime=1 00:14:39.148 ioengine=libaio 00:14:39.148 direct=1 00:14:39.148 bs=4096 00:14:39.148 iodepth=1 00:14:39.148 norandommap=0 00:14:39.148 numjobs=1 00:14:39.148 00:14:39.148 verify_dump=1 00:14:39.148 verify_backlog=512 00:14:39.148 verify_state_save=0 00:14:39.148 do_verify=1 00:14:39.148 verify=crc32c-intel 00:14:39.148 [job0] 00:14:39.148 filename=/dev/nvme0n1 00:14:39.148 [job1] 00:14:39.148 filename=/dev/nvme0n2 00:14:39.148 [job2] 00:14:39.148 filename=/dev/nvme0n3 00:14:39.148 [job3] 00:14:39.148 filename=/dev/nvme0n4 00:14:39.148 Could not set queue depth (nvme0n1) 00:14:39.148 Could not set queue depth (nvme0n2) 00:14:39.148 Could not set queue depth (nvme0n3) 00:14:39.148 Could not set queue depth (nvme0n4) 00:14:39.405 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:39.405 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:39.405 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:39.405 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:39.405 fio-3.35 00:14:39.405 Starting 4 threads 00:14:40.777 00:14:40.777 job0: (groupid=0, jobs=1): err= 0: pid=80680: Fri Jul 12 12:26:09 2024 00:14:40.777 read: IOPS=2925, BW=11.4MiB/s (12.0MB/s)(11.4MiB/1001msec) 00:14:40.777 slat (nsec): min=12777, max=60230, avg=16875.20, stdev=4098.18 00:14:40.777 clat (usec): min=140, max=408, avg=168.25, stdev=12.71 00:14:40.777 lat (usec): min=154, max=422, avg=185.13, stdev=13.47 00:14:40.777 clat percentiles (usec): 00:14:40.777 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:14:40.777 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 169], 00:14:40.777 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 190], 00:14:40.777 | 99.00th=[ 204], 99.50th=[ 208], 99.90th=[ 229], 99.95th=[ 330], 00:14:40.777 | 99.99th=[ 408] 00:14:40.777 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:14:40.777 slat (nsec): 
min=15061, max=86902, avg=23541.43, stdev=5370.75 00:14:40.777 clat (usec): min=92, max=1576, avg=121.78, stdev=28.20 00:14:40.777 lat (usec): min=116, max=1597, avg=145.32, stdev=28.78 00:14:40.777 clat percentiles (usec): 00:14:40.777 | 1.00th=[ 103], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 114], 00:14:40.777 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 121], 60.00th=[ 123], 00:14:40.777 | 70.00th=[ 126], 80.00th=[ 130], 90.00th=[ 135], 95.00th=[ 141], 00:14:40.777 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 223], 00:14:40.777 | 99.99th=[ 1582] 00:14:40.777 bw ( KiB/s): min=12288, max=12288, per=29.09%, avg=12288.00, stdev= 0.00, samples=1 00:14:40.777 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:40.777 lat (usec) : 100=0.10%, 250=99.85%, 500=0.03% 00:14:40.777 lat (msec) : 2=0.02% 00:14:40.777 cpu : usr=3.30%, sys=9.00%, ctx=6000, majf=0, minf=3 00:14:40.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:40.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.777 issued rwts: total=2928,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:40.777 job1: (groupid=0, jobs=1): err= 0: pid=80681: Fri Jul 12 12:26:09 2024 00:14:40.777 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:14:40.777 slat (nsec): min=8852, max=49655, avg=15149.25, stdev=3766.66 00:14:40.777 clat (usec): min=140, max=432, avg=189.53, stdev=52.08 00:14:40.777 lat (usec): min=154, max=446, avg=204.68, stdev=51.31 00:14:40.777 clat percentiles (usec): 00:14:40.777 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:14:40.777 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:14:40.777 | 70.00th=[ 182], 80.00th=[ 221], 90.00th=[ 262], 95.00th=[ 330], 00:14:40.777 | 99.00th=[ 359], 99.50th=[ 367], 99.90th=[ 388], 99.95th=[ 424], 00:14:40.777 | 99.99th=[ 433] 00:14:40.777 write: IOPS=3021, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1001msec); 0 zone resets 00:14:40.777 slat (usec): min=14, max=126, avg=22.25, stdev= 5.80 00:14:40.777 clat (usec): min=93, max=674, avg=132.04, stdev=29.50 00:14:40.777 lat (usec): min=112, max=694, avg=154.28, stdev=29.70 00:14:40.777 clat percentiles (usec): 00:14:40.777 | 1.00th=[ 99], 5.00th=[ 105], 10.00th=[ 109], 20.00th=[ 114], 00:14:40.777 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 125], 60.00th=[ 129], 00:14:40.777 | 70.00th=[ 135], 80.00th=[ 147], 90.00th=[ 172], 95.00th=[ 186], 00:14:40.777 | 99.00th=[ 221], 99.50th=[ 229], 99.90th=[ 281], 99.95th=[ 676], 00:14:40.777 | 99.99th=[ 676] 00:14:40.777 bw ( KiB/s): min=12288, max=12288, per=29.09%, avg=12288.00, stdev= 0.00, samples=1 00:14:40.777 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:40.777 lat (usec) : 100=0.72%, 250=92.50%, 500=6.75%, 750=0.04% 00:14:40.777 cpu : usr=2.50%, sys=8.20%, ctx=5585, majf=0, minf=9 00:14:40.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:40.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.777 issued rwts: total=2560,3025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:40.777 job2: (groupid=0, jobs=1): err= 0: pid=80682: Fri Jul 12 12:26:09 2024 00:14:40.777 read: 
IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:14:40.777 slat (nsec): min=12851, max=83342, avg=17247.08, stdev=4685.24 00:14:40.777 clat (usec): min=149, max=1005, avg=240.66, stdev=75.01 00:14:40.777 lat (usec): min=164, max=1034, avg=257.91, stdev=77.18 00:14:40.777 clat percentiles (usec): 00:14:40.777 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:14:40.777 | 30.00th=[ 180], 40.00th=[ 194], 50.00th=[ 255], 60.00th=[ 265], 00:14:40.777 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 367], 00:14:40.777 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 578], 99.95th=[ 693], 00:14:40.777 | 99.99th=[ 1004] 00:14:40.777 write: IOPS=2424, BW=9698KiB/s (9931kB/s)(9708KiB/1001msec); 0 zone resets 00:14:40.777 slat (nsec): min=14145, max=84359, avg=23762.18, stdev=5921.02 00:14:40.777 clat (usec): min=97, max=1680, avg=166.95, stdev=51.56 00:14:40.777 lat (usec): min=115, max=1701, avg=190.71, stdev=53.40 00:14:40.777 clat percentiles (usec): 00:14:40.777 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 117], 20.00th=[ 124], 00:14:40.777 | 30.00th=[ 130], 40.00th=[ 139], 50.00th=[ 176], 60.00th=[ 190], 00:14:40.777 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 227], 00:14:40.777 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 318], 99.95th=[ 343], 00:14:40.777 | 99.99th=[ 1680] 00:14:40.777 bw ( KiB/s): min= 8192, max= 8192, per=19.39%, avg=8192.00, stdev= 0.00, samples=1 00:14:40.777 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:40.777 lat (usec) : 100=0.07%, 250=75.11%, 500=24.11%, 750=0.67% 00:14:40.777 lat (msec) : 2=0.04% 00:14:40.777 cpu : usr=1.70%, sys=7.60%, ctx=4480, majf=0, minf=12 00:14:40.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:40.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.777 issued rwts: total=2048,2427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:40.777 job3: (groupid=0, jobs=1): err= 0: pid=80683: Fri Jul 12 12:26:09 2024 00:14:40.777 read: IOPS=1685, BW=6741KiB/s (6903kB/s)(6748KiB/1001msec) 00:14:40.777 slat (nsec): min=12850, max=56358, avg=17622.54, stdev=4764.93 00:14:40.777 clat (usec): min=177, max=560, avg=284.38, stdev=45.08 00:14:40.777 lat (usec): min=195, max=583, avg=302.00, stdev=46.44 00:14:40.777 clat percentiles (usec): 00:14:40.777 | 1.00th=[ 225], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 253], 00:14:40.777 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:14:40.777 | 70.00th=[ 289], 80.00th=[ 318], 90.00th=[ 343], 95.00th=[ 363], 00:14:40.777 | 99.00th=[ 474], 99.50th=[ 502], 99.90th=[ 523], 99.95th=[ 562], 00:14:40.778 | 99.99th=[ 562] 00:14:40.778 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:14:40.778 slat (usec): min=11, max=170, avg=25.91, stdev= 8.73 00:14:40.778 clat (usec): min=104, max=8005, avg=209.75, stdev=203.57 00:14:40.778 lat (usec): min=131, max=8058, avg=235.66, stdev=206.56 00:14:40.778 clat percentiles (usec): 00:14:40.778 | 1.00th=[ 118], 5.00th=[ 133], 10.00th=[ 147], 20.00th=[ 165], 00:14:40.778 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:14:40.778 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 273], 95.00th=[ 330], 00:14:40.778 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 1729], 99.95th=[ 4047], 00:14:40.778 | 99.99th=[ 8029] 00:14:40.778 bw ( KiB/s): min= 8192, max= 
8192, per=19.39%, avg=8192.00, stdev= 0.00, samples=1 00:14:40.778 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:40.778 lat (usec) : 250=56.09%, 500=43.53%, 750=0.27% 00:14:40.778 lat (msec) : 2=0.05%, 10=0.05% 00:14:40.778 cpu : usr=2.40%, sys=6.00%, ctx=3741, majf=0, minf=13 00:14:40.778 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:40.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.778 issued rwts: total=1687,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.778 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:40.778 00:14:40.778 Run status group 0 (all jobs): 00:14:40.778 READ: bw=36.0MiB/s (37.7MB/s), 6741KiB/s-11.4MiB/s (6903kB/s-12.0MB/s), io=36.0MiB (37.8MB), run=1001-1001msec 00:14:40.778 WRITE: bw=41.3MiB/s (43.3MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=41.3MiB (43.3MB), run=1001-1001msec 00:14:40.778 00:14:40.778 Disk stats (read/write): 00:14:40.778 nvme0n1: ios=2610/2591, merge=0/0, ticks=461/331, in_queue=792, util=87.88% 00:14:40.778 nvme0n2: ios=2472/2560, merge=0/0, ticks=469/349, in_queue=818, util=88.77% 00:14:40.778 nvme0n3: ios=1629/2048, merge=0/0, ticks=420/379, in_queue=799, util=89.26% 00:14:40.778 nvme0n4: ios=1536/1622, merge=0/0, ticks=437/351, in_queue=788, util=89.09% 00:14:40.778 12:26:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:40.778 [global] 00:14:40.778 thread=1 00:14:40.778 invalidate=1 00:14:40.778 rw=randwrite 00:14:40.778 time_based=1 00:14:40.778 runtime=1 00:14:40.778 ioengine=libaio 00:14:40.778 direct=1 00:14:40.778 bs=4096 00:14:40.778 iodepth=1 00:14:40.778 norandommap=0 00:14:40.778 numjobs=1 00:14:40.778 00:14:40.778 verify_dump=1 00:14:40.778 verify_backlog=512 00:14:40.778 verify_state_save=0 00:14:40.778 do_verify=1 00:14:40.778 verify=crc32c-intel 00:14:40.778 [job0] 00:14:40.778 filename=/dev/nvme0n1 00:14:40.778 [job1] 00:14:40.778 filename=/dev/nvme0n2 00:14:40.778 [job2] 00:14:40.778 filename=/dev/nvme0n3 00:14:40.778 [job3] 00:14:40.778 filename=/dev/nvme0n4 00:14:40.778 Could not set queue depth (nvme0n1) 00:14:40.778 Could not set queue depth (nvme0n2) 00:14:40.778 Could not set queue depth (nvme0n3) 00:14:40.778 Could not set queue depth (nvme0n4) 00:14:40.778 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:40.778 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:40.778 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:40.778 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:40.778 fio-3.35 00:14:40.778 Starting 4 threads 00:14:42.164 00:14:42.164 job0: (groupid=0, jobs=1): err= 0: pid=80736: Fri Jul 12 12:26:10 2024 00:14:42.164 read: IOPS=2185, BW=8743KiB/s (8953kB/s)(8752KiB/1001msec) 00:14:42.164 slat (usec): min=7, max=110, avg=13.62, stdev= 4.98 00:14:42.164 clat (usec): min=130, max=4096, avg=218.86, stdev=111.46 00:14:42.164 lat (usec): min=142, max=4117, avg=232.47, stdev=110.81 00:14:42.164 clat percentiles (usec): 00:14:42.164 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:14:42.164 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 172], 60.00th=[ 
269], 00:14:42.164 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 322], 00:14:42.164 | 99.00th=[ 396], 99.50th=[ 449], 99.90th=[ 570], 99.95th=[ 938], 00:14:42.164 | 99.99th=[ 4113] 00:14:42.164 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:14:42.164 slat (usec): min=11, max=126, avg=20.07, stdev= 5.89 00:14:42.164 clat (usec): min=55, max=7508, avg=168.59, stdev=160.40 00:14:42.164 lat (usec): min=108, max=7528, avg=188.66, stdev=160.45 00:14:42.164 clat percentiles (usec): 00:14:42.164 | 1.00th=[ 95], 5.00th=[ 101], 10.00th=[ 104], 20.00th=[ 111], 00:14:42.164 | 30.00th=[ 116], 40.00th=[ 122], 50.00th=[ 130], 60.00th=[ 165], 00:14:42.164 | 70.00th=[ 225], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 255], 00:14:42.164 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 881], 99.95th=[ 1450], 00:14:42.164 | 99.99th=[ 7504] 00:14:42.164 bw ( KiB/s): min=12288, max=12288, per=31.61%, avg=12288.00, stdev= 0.00, samples=1 00:14:42.164 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:42.164 lat (usec) : 100=2.40%, 250=74.28%, 500=23.17%, 750=0.02%, 1000=0.06% 00:14:42.164 lat (msec) : 2=0.02%, 10=0.04% 00:14:42.164 cpu : usr=2.50%, sys=6.10%, ctx=4759, majf=0, minf=15 00:14:42.164 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:42.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.164 issued rwts: total=2188,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.164 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:42.164 job1: (groupid=0, jobs=1): err= 0: pid=80737: Fri Jul 12 12:26:10 2024 00:14:42.164 read: IOPS=1680, BW=6721KiB/s (6883kB/s)(6728KiB/1001msec) 00:14:42.164 slat (nsec): min=7912, max=55353, avg=11888.15, stdev=5358.33 00:14:42.164 clat (usec): min=212, max=1686, avg=284.23, stdev=50.80 00:14:42.164 lat (usec): min=222, max=1695, avg=296.12, stdev=50.51 00:14:42.164 clat percentiles (usec): 00:14:42.164 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 255], 00:14:42.164 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 289], 00:14:42.164 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 334], 00:14:42.164 | 99.00th=[ 420], 99.50th=[ 469], 99.90th=[ 938], 99.95th=[ 1680], 00:14:42.164 | 99.99th=[ 1680] 00:14:42.164 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:14:42.164 slat (usec): min=10, max=291, avg=19.13, stdev=14.12 00:14:42.164 clat (usec): min=3, max=590, avg=223.33, stdev=33.76 00:14:42.164 lat (usec): min=136, max=800, avg=242.47, stdev=34.56 00:14:42.164 clat percentiles (usec): 00:14:42.164 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 200], 00:14:42.164 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 225], 00:14:42.165 | 70.00th=[ 235], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 277], 00:14:42.165 | 99.00th=[ 302], 99.50th=[ 330], 99.90th=[ 515], 99.95th=[ 570], 00:14:42.165 | 99.99th=[ 594] 00:14:42.165 bw ( KiB/s): min= 8192, max= 8192, per=21.07%, avg=8192.00, stdev= 0.00, samples=1 00:14:42.165 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:42.165 lat (usec) : 4=0.03%, 50=0.03%, 100=0.05%, 250=50.32%, 500=49.41% 00:14:42.165 lat (usec) : 750=0.11%, 1000=0.03% 00:14:42.165 lat (msec) : 2=0.03% 00:14:42.165 cpu : usr=1.40%, sys=4.80%, ctx=3768, majf=0, minf=7 00:14:42.165 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
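The four jobs in this run are generated by scripts/fio-wrapper (invoked above as "-p nvmf -i 4096 -d 1 -t randwrite -r 1 -v"), one job per namespace of nqn.2016-06.io.spdk:cnode1. A rough stand-alone equivalent for a single namespace (a sketch only; the device name and options are taken from the job file printed above, not from fio-wrapper itself) is:

  # Illustrative re-run of one generated job against the first namespace
  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=randwrite --bs=4096 --iodepth=1 --numjobs=1 \
      --ioengine=libaio --direct=1 --time_based --runtime=1 \
      --verify=crc32c-intel --do_verify=1 --verify_dump=1 --verify_backlog=512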
00:14:42.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.165 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.165 issued rwts: total=1682,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.165 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:42.165 job2: (groupid=0, jobs=1): err= 0: pid=80738: Fri Jul 12 12:26:10 2024 00:14:42.165 read: IOPS=2723, BW=10.6MiB/s (11.2MB/s)(10.6MiB/1001msec) 00:14:42.165 slat (nsec): min=11895, max=58760, avg=16887.19, stdev=5239.06 00:14:42.165 clat (usec): min=144, max=552, avg=174.52, stdev=17.28 00:14:42.165 lat (usec): min=159, max=566, avg=191.41, stdev=19.22 00:14:42.165 clat percentiles (usec): 00:14:42.165 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:14:42.165 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:14:42.165 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 198], 00:14:42.165 | 99.00th=[ 215], 99.50th=[ 233], 99.90th=[ 371], 99.95th=[ 388], 00:14:42.165 | 99.99th=[ 553] 00:14:42.165 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:14:42.165 slat (usec): min=14, max=117, avg=23.46, stdev= 7.57 00:14:42.165 clat (usec): min=95, max=1593, avg=128.76, stdev=32.09 00:14:42.165 lat (usec): min=116, max=1613, avg=152.23, stdev=33.38 00:14:42.165 clat percentiles (usec): 00:14:42.165 | 1.00th=[ 102], 5.00th=[ 108], 10.00th=[ 112], 20.00th=[ 118], 00:14:42.165 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 128], 60.00th=[ 131], 00:14:42.165 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 149], 00:14:42.165 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 412], 99.95th=[ 570], 00:14:42.165 | 99.99th=[ 1598] 00:14:42.165 bw ( KiB/s): min=12288, max=12288, per=31.61%, avg=12288.00, stdev= 0.00, samples=1 00:14:42.165 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:42.165 lat (usec) : 100=0.21%, 250=99.53%, 500=0.19%, 750=0.05% 00:14:42.165 lat (msec) : 2=0.02% 00:14:42.165 cpu : usr=3.20%, sys=8.60%, ctx=5807, majf=0, minf=7 00:14:42.165 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:42.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.165 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.165 issued rwts: total=2726,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.165 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:42.165 job3: (groupid=0, jobs=1): err= 0: pid=80739: Fri Jul 12 12:26:10 2024 00:14:42.165 read: IOPS=1683, BW=6733KiB/s (6895kB/s)(6740KiB/1001msec) 00:14:42.165 slat (nsec): min=7356, max=67814, avg=14575.58, stdev=5495.40 00:14:42.165 clat (usec): min=167, max=1763, avg=280.81, stdev=52.66 00:14:42.165 lat (usec): min=181, max=1778, avg=295.39, stdev=52.02 00:14:42.165 clat percentiles (usec): 00:14:42.165 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 251], 00:14:42.165 | 30.00th=[ 262], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:14:42.165 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 326], 00:14:42.165 | 99.00th=[ 396], 99.50th=[ 465], 99.90th=[ 971], 99.95th=[ 1762], 00:14:42.165 | 99.99th=[ 1762] 00:14:42.165 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:14:42.165 slat (usec): min=11, max=375, avg=20.82, stdev=14.35 00:14:42.165 clat (usec): min=14, max=715, avg=221.44, stdev=34.40 00:14:42.165 lat (usec): min=165, max=735, avg=242.26, stdev=34.91 00:14:42.165 clat 
percentiles (usec): 00:14:42.165 | 1.00th=[ 169], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 196], 00:14:42.165 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 225], 00:14:42.165 | 70.00th=[ 233], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 273], 00:14:42.165 | 99.00th=[ 297], 99.50th=[ 326], 99.90th=[ 490], 99.95th=[ 660], 00:14:42.165 | 99.99th=[ 717] 00:14:42.165 bw ( KiB/s): min= 8192, max= 8192, per=21.07%, avg=8192.00, stdev= 0.00, samples=1 00:14:42.165 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:42.165 lat (usec) : 20=0.03%, 100=0.08%, 250=53.07%, 500=46.61%, 750=0.16% 00:14:42.165 lat (usec) : 1000=0.03% 00:14:42.165 lat (msec) : 2=0.03% 00:14:42.165 cpu : usr=1.30%, sys=5.80%, ctx=3765, majf=0, minf=16 00:14:42.165 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:42.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.165 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.165 issued rwts: total=1685,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.165 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:42.165 00:14:42.165 Run status group 0 (all jobs): 00:14:42.165 READ: bw=32.3MiB/s (33.9MB/s), 6721KiB/s-10.6MiB/s (6883kB/s-11.2MB/s), io=32.3MiB (33.9MB), run=1001-1001msec 00:14:42.165 WRITE: bw=38.0MiB/s (39.8MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=38.0MiB (39.8MB), run=1001-1001msec 00:14:42.165 00:14:42.165 Disk stats (read/write): 00:14:42.165 nvme0n1: ios=2098/2217, merge=0/0, ticks=443/345, in_queue=788, util=87.98% 00:14:42.165 nvme0n2: ios=1584/1715, merge=0/0, ticks=426/338, in_queue=764, util=89.20% 00:14:42.165 nvme0n3: ios=2414/2560, merge=0/0, ticks=433/358, in_queue=791, util=89.40% 00:14:42.165 nvme0n4: ios=1536/1719, merge=0/0, ticks=419/354, in_queue=773, util=89.66% 00:14:42.165 12:26:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:42.165 [global] 00:14:42.165 thread=1 00:14:42.165 invalidate=1 00:14:42.165 rw=write 00:14:42.165 time_based=1 00:14:42.165 runtime=1 00:14:42.165 ioengine=libaio 00:14:42.165 direct=1 00:14:42.165 bs=4096 00:14:42.165 iodepth=128 00:14:42.165 norandommap=0 00:14:42.165 numjobs=1 00:14:42.165 00:14:42.165 verify_dump=1 00:14:42.165 verify_backlog=512 00:14:42.165 verify_state_save=0 00:14:42.165 do_verify=1 00:14:42.165 verify=crc32c-intel 00:14:42.165 [job0] 00:14:42.165 filename=/dev/nvme0n1 00:14:42.165 [job1] 00:14:42.165 filename=/dev/nvme0n2 00:14:42.165 [job2] 00:14:42.165 filename=/dev/nvme0n3 00:14:42.165 [job3] 00:14:42.165 filename=/dev/nvme0n4 00:14:42.165 Could not set queue depth (nvme0n1) 00:14:42.165 Could not set queue depth (nvme0n2) 00:14:42.165 Could not set queue depth (nvme0n3) 00:14:42.165 Could not set queue depth (nvme0n4) 00:14:42.165 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:42.165 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:42.165 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:42.165 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:42.165 fio-3.35 00:14:42.165 Starting 4 threads 00:14:43.536 00:14:43.536 job0: (groupid=0, jobs=1): err= 0: pid=80800: Fri Jul 12 12:26:12 2024 00:14:43.536 read: 
IOPS=1052, BW=4211KiB/s (4312kB/s)(4228KiB/1004msec) 00:14:43.536 slat (usec): min=7, max=12508, avg=353.98, stdev=1298.33 00:14:43.536 clat (usec): min=2632, max=59580, avg=41282.21, stdev=8022.09 00:14:43.536 lat (usec): min=8046, max=59593, avg=41636.19, stdev=7906.65 00:14:43.536 clat percentiles (usec): 00:14:43.536 | 1.00th=[25035], 5.00th=[29754], 10.00th=[31851], 20.00th=[35914], 00:14:43.536 | 30.00th=[36963], 40.00th=[38011], 50.00th=[40109], 60.00th=[42206], 00:14:43.536 | 70.00th=[45351], 80.00th=[49546], 90.00th=[53216], 95.00th=[54264], 00:14:43.536 | 99.00th=[57410], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:14:43.536 | 99.99th=[59507] 00:14:43.536 write: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec); 0 zone resets 00:14:43.536 slat (usec): min=11, max=14835, avg=400.08, stdev=1392.35 00:14:43.536 clat (usec): min=19109, max=90748, avg=53685.72, stdev=23005.11 00:14:43.536 lat (usec): min=24006, max=90778, avg=54085.80, stdev=23131.09 00:14:43.536 clat percentiles (usec): 00:14:43.536 | 1.00th=[23987], 5.00th=[28705], 10.00th=[30802], 20.00th=[31327], 00:14:43.536 | 30.00th=[32113], 40.00th=[37487], 50.00th=[41681], 60.00th=[61604], 00:14:43.536 | 70.00th=[77071], 80.00th=[82314], 90.00th=[85459], 95.00th=[86508], 00:14:43.536 | 99.00th=[89654], 99.50th=[89654], 99.90th=[90702], 99.95th=[90702], 00:14:43.536 | 99.99th=[90702] 00:14:43.536 bw ( KiB/s): min= 4240, max= 7296, per=10.70%, avg=5768.00, stdev=2160.92, samples=2 00:14:43.536 iops : min= 1060, max= 1824, avg=1442.00, stdev=540.23, samples=2 00:14:43.536 lat (msec) : 4=0.04%, 10=0.08%, 20=0.31%, 50=66.06%, 100=33.51% 00:14:43.536 cpu : usr=1.69%, sys=4.59%, ctx=427, majf=0, minf=11 00:14:43.536 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:14:43.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:43.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:43.536 issued rwts: total=1057,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:43.536 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:43.536 job1: (groupid=0, jobs=1): err= 0: pid=80801: Fri Jul 12 12:26:12 2024 00:14:43.536 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:14:43.536 slat (usec): min=7, max=7120, avg=121.55, stdev=528.36 00:14:43.536 clat (usec): min=10071, max=30575, avg=15662.46, stdev=3486.70 00:14:43.536 lat (usec): min=10090, max=30591, avg=15784.01, stdev=3532.16 00:14:43.536 clat percentiles (usec): 00:14:43.536 | 1.00th=[11207], 5.00th=[11863], 10.00th=[12649], 20.00th=[12780], 00:14:43.536 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13960], 60.00th=[16057], 00:14:43.536 | 70.00th=[18482], 80.00th=[19006], 90.00th=[20317], 95.00th=[21365], 00:14:43.536 | 99.00th=[24249], 99.50th=[28705], 99.90th=[30540], 99.95th=[30540], 00:14:43.536 | 99.99th=[30540] 00:14:43.536 write: IOPS=4300, BW=16.8MiB/s (17.6MB/s)(16.8MiB/1003msec); 0 zone resets 00:14:43.536 slat (usec): min=11, max=7407, avg=108.34, stdev=537.33 00:14:43.536 clat (usec): min=2037, max=35345, avg=14397.52, stdev=4841.03 00:14:43.536 lat (usec): min=6770, max=35371, avg=14505.86, stdev=4892.62 00:14:43.536 clat percentiles (usec): 00:14:43.536 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10421], 20.00th=[10683], 00:14:43.536 | 30.00th=[10945], 40.00th=[11207], 50.00th=[13173], 60.00th=[13829], 00:14:43.536 | 70.00th=[14615], 80.00th=[18744], 90.00th=[23200], 95.00th=[23725], 00:14:43.536 | 99.00th=[30540], 99.50th=[32375], 99.90th=[35390], 
99.95th=[35390], 00:14:43.536 | 99.99th=[35390] 00:14:43.536 bw ( KiB/s): min=16360, max=17128, per=31.06%, avg=16744.00, stdev=543.06, samples=2 00:14:43.536 iops : min= 4090, max= 4282, avg=4186.00, stdev=135.76, samples=2 00:14:43.536 lat (msec) : 4=0.01%, 10=1.13%, 20=86.94%, 50=11.92% 00:14:43.536 cpu : usr=4.09%, sys=12.28%, ctx=355, majf=0, minf=10 00:14:43.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:43.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:43.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:43.536 issued rwts: total=4096,4313,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:43.536 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:43.536 job2: (groupid=0, jobs=1): err= 0: pid=80802: Fri Jul 12 12:26:12 2024 00:14:43.536 read: IOPS=5691, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1001msec) 00:14:43.536 slat (usec): min=6, max=3178, avg=81.66, stdev=372.67 00:14:43.536 clat (usec): min=301, max=14025, avg=10896.32, stdev=1053.78 00:14:43.536 lat (usec): min=2582, max=14047, avg=10977.98, stdev=988.28 00:14:43.536 clat percentiles (usec): 00:14:43.536 | 1.00th=[ 5407], 5.00th=[10290], 10.00th=[10421], 20.00th=[10683], 00:14:43.536 | 30.00th=[10814], 40.00th=[10814], 50.00th=[10945], 60.00th=[10945], 00:14:43.536 | 70.00th=[11076], 80.00th=[11076], 90.00th=[11338], 95.00th=[12518], 00:14:43.536 | 99.00th=[13698], 99.50th=[13829], 99.90th=[13960], 99.95th=[13960], 00:14:43.536 | 99.99th=[14091] 00:14:43.536 write: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec); 0 zone resets 00:14:43.536 slat (usec): min=10, max=2608, avg=79.63, stdev=320.44 00:14:43.536 clat (usec): min=7751, max=13466, avg=10489.02, stdev=684.91 00:14:43.536 lat (usec): min=7769, max=13488, avg=10568.65, stdev=611.04 00:14:43.536 clat percentiles (usec): 00:14:43.536 | 1.00th=[ 8356], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10159], 00:14:43.536 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10421], 60.00th=[10552], 00:14:43.536 | 70.00th=[10683], 80.00th=[10814], 90.00th=[10945], 95.00th=[11076], 00:14:43.536 | 99.00th=[13304], 99.50th=[13435], 99.90th=[13435], 99.95th=[13435], 00:14:43.536 | 99.99th=[13435] 00:14:43.536 bw ( KiB/s): min=24576, max=24576, per=45.60%, avg=24576.00, stdev= 0.00, samples=1 00:14:43.536 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:14:43.536 lat (usec) : 500=0.01% 00:14:43.536 lat (msec) : 4=0.27%, 10=6.88%, 20=92.84% 00:14:43.536 cpu : usr=4.30%, sys=17.50%, ctx=372, majf=0, minf=7 00:14:43.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:14:43.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:43.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:43.536 issued rwts: total=5697,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:43.536 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:43.536 job3: (groupid=0, jobs=1): err= 0: pid=80803: Fri Jul 12 12:26:12 2024 00:14:43.536 read: IOPS=1031, BW=4128KiB/s (4227kB/s)(4140KiB/1003msec) 00:14:43.536 slat (usec): min=7, max=5777, avg=330.07, stdev=1015.95 00:14:43.536 clat (usec): min=181, max=79400, avg=44177.74, stdev=9827.66 00:14:43.536 lat (usec): min=3140, max=79421, avg=44507.81, stdev=9777.21 00:14:43.536 clat percentiles (usec): 00:14:43.536 | 1.00th=[ 3294], 5.00th=[35914], 10.00th=[36963], 20.00th=[37487], 00:14:43.536 | 30.00th=[38011], 40.00th=[40109], 50.00th=[41681], 60.00th=[44303], 
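For orientation, the target-side stack that these iodepth=128 jobs exercise was assembled earlier in this log via rpc.py. Condensed into one place (the same commands as traced above; the loop form is a sketch, not the literal fio.sh code, and the command order is slightly simplified):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 7); do "$rpc" bdev_malloc_create 64 512; done   # Malloc0..Malloc6
  "$rpc" bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  "$rpc" bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  done
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420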
00:14:43.536 | 70.00th=[47449], 80.00th=[50594], 90.00th=[53740], 95.00th=[66847], 00:14:43.536 | 99.00th=[77071], 99.50th=[77071], 99.90th=[79168], 99.95th=[79168], 00:14:43.536 | 99.99th=[79168] 00:14:43.536 write: IOPS=1531, BW=6126KiB/s (6273kB/s)(6144KiB/1003msec); 0 zone resets 00:14:43.536 slat (usec): min=14, max=14528, avg=422.31, stdev=1500.61 00:14:43.536 clat (usec): min=3335, max=87509, avg=52048.64, stdev=22936.30 00:14:43.536 lat (usec): min=3361, max=87962, avg=52470.95, stdev=23049.42 00:14:43.536 clat percentiles (usec): 00:14:43.536 | 1.00th=[ 3785], 5.00th=[27132], 10.00th=[29754], 20.00th=[31327], 00:14:43.536 | 30.00th=[32637], 40.00th=[38011], 50.00th=[41681], 60.00th=[54264], 00:14:43.536 | 70.00th=[76022], 80.00th=[81265], 90.00th=[84411], 95.00th=[86508], 00:14:43.536 | 99.00th=[86508], 99.50th=[86508], 99.90th=[87557], 99.95th=[87557], 00:14:43.536 | 99.99th=[87557] 00:14:43.536 bw ( KiB/s): min= 4416, max= 6922, per=10.52%, avg=5669.00, stdev=1772.01, samples=2 00:14:43.536 iops : min= 1104, max= 1730, avg=1417.00, stdev=442.65, samples=2 00:14:43.536 lat (usec) : 250=0.04% 00:14:43.536 lat (msec) : 4=1.05%, 10=0.23%, 20=0.35%, 50=63.98%, 100=34.34% 00:14:43.536 cpu : usr=1.50%, sys=4.79%, ctx=433, majf=0, minf=7 00:14:43.536 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.5% 00:14:43.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:43.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:43.536 issued rwts: total=1035,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:43.536 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:43.536 00:14:43.536 Run status group 0 (all jobs): 00:14:43.536 READ: bw=46.2MiB/s (48.5MB/s), 4128KiB/s-22.2MiB/s (4227kB/s-23.3MB/s), io=46.4MiB (48.7MB), run=1001-1004msec 00:14:43.536 WRITE: bw=52.6MiB/s (55.2MB/s), 6120KiB/s-24.0MiB/s (6266kB/s-25.1MB/s), io=52.8MiB (55.4MB), run=1001-1004msec 00:14:43.536 00:14:43.536 Disk stats (read/write): 00:14:43.536 nvme0n1: ios=1074/1358, merge=0/0, ticks=10751/16142, in_queue=26893, util=89.07% 00:14:43.536 nvme0n2: ios=3499/3584, merge=0/0, ticks=18273/15284, in_queue=33557, util=89.29% 00:14:43.536 nvme0n3: ios=5120/5152, merge=0/0, ticks=12210/11347, in_queue=23557, util=89.55% 00:14:43.536 nvme0n4: ios=1024/1289, merge=0/0, ticks=10454/15744, in_queue=26198, util=89.40% 00:14:43.536 12:26:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:43.536 [global] 00:14:43.536 thread=1 00:14:43.536 invalidate=1 00:14:43.536 rw=randwrite 00:14:43.536 time_based=1 00:14:43.536 runtime=1 00:14:43.536 ioengine=libaio 00:14:43.536 direct=1 00:14:43.536 bs=4096 00:14:43.536 iodepth=128 00:14:43.536 norandommap=0 00:14:43.536 numjobs=1 00:14:43.536 00:14:43.536 verify_dump=1 00:14:43.536 verify_backlog=512 00:14:43.536 verify_state_save=0 00:14:43.536 do_verify=1 00:14:43.536 verify=crc32c-intel 00:14:43.536 [job0] 00:14:43.536 filename=/dev/nvme0n1 00:14:43.536 [job1] 00:14:43.536 filename=/dev/nvme0n2 00:14:43.536 [job2] 00:14:43.536 filename=/dev/nvme0n3 00:14:43.536 [job3] 00:14:43.536 filename=/dev/nvme0n4 00:14:43.536 Could not set queue depth (nvme0n1) 00:14:43.536 Could not set queue depth (nvme0n2) 00:14:43.536 Could not set queue depth (nvme0n3) 00:14:43.536 Could not set queue depth (nvme0n4) 00:14:43.536 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:14:43.536 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:43.536 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:43.536 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:43.536 fio-3.35 00:14:43.536 Starting 4 threads 00:14:44.911 00:14:44.911 job0: (groupid=0, jobs=1): err= 0: pid=80856: Fri Jul 12 12:26:13 2024 00:14:44.911 read: IOPS=3124, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1004msec) 00:14:44.911 slat (usec): min=7, max=12220, avg=151.66, stdev=812.88 00:14:44.911 clat (usec): min=1339, max=44911, avg=19796.92, stdev=6576.69 00:14:44.911 lat (usec): min=4821, max=44936, avg=19948.58, stdev=6567.70 00:14:44.911 clat percentiles (usec): 00:14:44.911 | 1.00th=[ 5145], 5.00th=[13042], 10.00th=[14353], 20.00th=[14877], 00:14:44.911 | 30.00th=[15139], 40.00th=[15401], 50.00th=[19268], 60.00th=[21890], 00:14:44.911 | 70.00th=[23200], 80.00th=[23725], 90.00th=[24773], 95.00th=[33817], 00:14:44.911 | 99.00th=[44303], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:14:44.911 | 99.99th=[44827] 00:14:44.911 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:14:44.911 slat (usec): min=11, max=12079, avg=139.14, stdev=745.09 00:14:44.911 clat (usec): min=9079, max=36105, avg=17753.07, stdev=6759.14 00:14:44.911 lat (usec): min=11016, max=36124, avg=17892.21, stdev=6779.45 00:14:44.911 clat percentiles (usec): 00:14:44.911 | 1.00th=[10945], 5.00th=[11338], 10.00th=[11600], 20.00th=[12649], 00:14:44.911 | 30.00th=[14222], 40.00th=[14746], 50.00th=[15270], 60.00th=[15533], 00:14:44.911 | 70.00th=[17695], 80.00th=[23200], 90.00th=[29492], 95.00th=[31851], 00:14:44.911 | 99.00th=[35390], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:14:44.911 | 99.99th=[35914] 00:14:44.911 bw ( KiB/s): min=12749, max=15398, per=28.54%, avg=14073.50, stdev=1873.13, samples=2 00:14:44.911 iops : min= 3187, max= 3849, avg=3518.00, stdev=468.10, samples=2 00:14:44.911 lat (msec) : 2=0.01%, 10=1.29%, 20=64.47%, 50=34.22% 00:14:44.911 cpu : usr=2.99%, sys=9.87%, ctx=213, majf=0, minf=13 00:14:44.911 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:14:44.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:44.911 issued rwts: total=3137,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:44.911 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:44.911 job1: (groupid=0, jobs=1): err= 0: pid=80861: Fri Jul 12 12:26:13 2024 00:14:44.911 read: IOPS=1525, BW=6101KiB/s (6248kB/s)(6144KiB/1007msec) 00:14:44.911 slat (usec): min=8, max=13849, avg=307.81, stdev=1197.53 00:14:44.911 clat (usec): min=21565, max=51941, avg=38754.82, stdev=5814.60 00:14:44.911 lat (usec): min=21582, max=51953, avg=39062.63, stdev=5751.23 00:14:44.911 clat percentiles (usec): 00:14:44.911 | 1.00th=[25035], 5.00th=[30016], 10.00th=[34341], 20.00th=[34866], 00:14:44.911 | 30.00th=[34866], 40.00th=[35390], 50.00th=[36439], 60.00th=[40109], 00:14:44.911 | 70.00th=[41681], 80.00th=[45351], 90.00th=[46924], 95.00th=[48497], 00:14:44.911 | 99.00th=[52167], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:14:44.911 | 99.99th=[52167] 00:14:44.911 write: IOPS=1610, BW=6443KiB/s (6598kB/s)(6488KiB/1007msec); 0 zone resets 00:14:44.911 slat (usec): min=10, max=9453, 
avg=321.22, stdev=1258.47 00:14:44.911 clat (usec): min=3021, max=76985, avg=41145.25, stdev=18345.21 00:14:44.911 lat (usec): min=7461, max=77011, avg=41466.47, stdev=18430.16 00:14:44.911 clat percentiles (usec): 00:14:44.911 | 1.00th=[10552], 5.00th=[22414], 10.00th=[25035], 20.00th=[27395], 00:14:44.911 | 30.00th=[29230], 40.00th=[29754], 50.00th=[30802], 60.00th=[35914], 00:14:44.911 | 70.00th=[49546], 80.00th=[64226], 90.00th=[71828], 95.00th=[74974], 00:14:44.911 | 99.00th=[77071], 99.50th=[77071], 99.90th=[77071], 99.95th=[77071], 00:14:44.911 | 99.99th=[77071] 00:14:44.911 bw ( KiB/s): min= 5633, max= 6632, per=12.44%, avg=6132.50, stdev=706.40, samples=2 00:14:44.911 iops : min= 1408, max= 1658, avg=1533.00, stdev=176.78, samples=2 00:14:44.911 lat (msec) : 4=0.03%, 10=0.41%, 20=1.17%, 50=82.62%, 100=15.77% 00:14:44.911 cpu : usr=1.49%, sys=3.67%, ctx=399, majf=0, minf=7 00:14:44.911 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:14:44.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:44.911 issued rwts: total=1536,1622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:44.911 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:44.911 job2: (groupid=0, jobs=1): err= 0: pid=80862: Fri Jul 12 12:26:13 2024 00:14:44.911 read: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec) 00:14:44.911 slat (usec): min=8, max=6003, avg=303.86, stdev=1004.42 00:14:44.911 clat (usec): min=17034, max=54976, avg=38531.94, stdev=5877.60 00:14:44.911 lat (usec): min=17045, max=54997, avg=38835.80, stdev=5830.29 00:14:44.911 clat percentiles (usec): 00:14:44.911 | 1.00th=[22414], 5.00th=[30540], 10.00th=[33424], 20.00th=[34866], 00:14:44.911 | 30.00th=[34866], 40.00th=[35390], 50.00th=[36439], 60.00th=[39060], 00:14:44.911 | 70.00th=[42730], 80.00th=[44303], 90.00th=[47449], 95.00th=[47973], 00:14:44.911 | 99.00th=[51119], 99.50th=[52691], 99.90th=[54789], 99.95th=[54789], 00:14:44.911 | 99.99th=[54789] 00:14:44.911 write: IOPS=1568, BW=6273KiB/s (6423kB/s)(6304KiB/1005msec); 0 zone resets 00:14:44.911 slat (usec): min=12, max=9063, avg=332.14, stdev=1263.61 00:14:44.911 clat (usec): min=4596, max=77049, avg=42215.74, stdev=17816.08 00:14:44.911 lat (usec): min=7427, max=77076, avg=42547.88, stdev=17899.80 00:14:44.911 clat percentiles (usec): 00:14:44.911 | 1.00th=[11076], 5.00th=[23725], 10.00th=[26346], 20.00th=[28967], 00:14:44.911 | 30.00th=[29492], 40.00th=[30278], 50.00th=[35390], 60.00th=[39584], 00:14:44.911 | 70.00th=[48497], 80.00th=[64226], 90.00th=[71828], 95.00th=[76022], 00:14:44.911 | 99.00th=[77071], 99.50th=[77071], 99.90th=[77071], 99.95th=[77071], 00:14:44.911 | 99.99th=[77071] 00:14:44.911 bw ( KiB/s): min= 5848, max= 6440, per=12.46%, avg=6144.00, stdev=418.61, samples=2 00:14:44.911 iops : min= 1462, max= 1610, avg=1536.00, stdev=104.65, samples=2 00:14:44.911 lat (msec) : 10=0.32%, 20=1.19%, 50=83.55%, 100=14.94% 00:14:44.911 cpu : usr=1.10%, sys=4.08%, ctx=402, majf=0, minf=19 00:14:44.911 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:14:44.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:44.911 issued rwts: total=1536,1576,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:44.911 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:44.911 job3: (groupid=0, jobs=1): err= 0: 
pid=80863: Fri Jul 12 12:26:13 2024 00:14:44.911 read: IOPS=5233, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1003msec) 00:14:44.911 slat (usec): min=4, max=2780, avg=90.12, stdev=379.98 00:14:44.911 clat (usec): min=454, max=14754, avg=11904.27, stdev=1656.35 00:14:44.911 lat (usec): min=2526, max=15782, avg=11994.39, stdev=1625.14 00:14:44.911 clat percentiles (usec): 00:14:44.911 | 1.00th=[ 5407], 5.00th=[10290], 10.00th=[10421], 20.00th=[10814], 00:14:44.911 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[13042], 00:14:44.911 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13698], 95.00th=[13829], 00:14:44.911 | 99.00th=[14353], 99.50th=[14615], 99.90th=[14746], 99.95th=[14746], 00:14:44.911 | 99.99th=[14746] 00:14:44.911 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:14:44.911 slat (usec): min=10, max=4413, avg=86.63, stdev=367.52 00:14:44.911 clat (usec): min=7942, max=17047, avg=11411.00, stdev=1457.28 00:14:44.911 lat (usec): min=8365, max=17069, avg=11497.63, stdev=1436.98 00:14:44.911 clat percentiles (usec): 00:14:44.911 | 1.00th=[ 8586], 5.00th=[10028], 10.00th=[10159], 20.00th=[10290], 00:14:44.911 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[11207], 00:14:44.911 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13173], 95.00th=[13435], 00:14:44.911 | 99.00th=[16581], 99.50th=[16909], 99.90th=[16909], 99.95th=[16909], 00:14:44.911 | 99.99th=[17171] 00:14:44.911 bw ( KiB/s): min=21050, max=23952, per=45.63%, avg=22501.00, stdev=2052.02, samples=2 00:14:44.911 iops : min= 5262, max= 5988, avg=5625.00, stdev=513.36, samples=2 00:14:44.911 lat (usec) : 500=0.01% 00:14:44.911 lat (msec) : 4=0.29%, 10=3.87%, 20=95.83% 00:14:44.911 cpu : usr=4.59%, sys=14.77%, ctx=494, majf=0, minf=11 00:14:44.911 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:44.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:44.912 issued rwts: total=5249,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:44.912 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:44.912 00:14:44.912 Run status group 0 (all jobs): 00:14:44.912 READ: bw=44.4MiB/s (46.6MB/s), 6101KiB/s-20.4MiB/s (6248kB/s-21.4MB/s), io=44.8MiB (46.9MB), run=1003-1007msec 00:14:44.912 WRITE: bw=48.2MiB/s (50.5MB/s), 6273KiB/s-21.9MiB/s (6423kB/s-23.0MB/s), io=48.5MiB (50.8MB), run=1003-1007msec 00:14:44.912 00:14:44.912 Disk stats (read/write): 00:14:44.912 nvme0n1: ios=2706/3072, merge=0/0, ticks=12275/12501, in_queue=24776, util=88.16% 00:14:44.912 nvme0n2: ios=1156/1536, merge=0/0, ticks=11188/15041, in_queue=26229, util=87.61% 00:14:44.912 nvme0n3: ios=1083/1536, merge=0/0, ticks=10467/15882, in_queue=26349, util=88.96% 00:14:44.912 nvme0n4: ios=4512/4608, merge=0/0, ticks=12479/11360, in_queue=23839, util=89.73% 00:14:44.912 12:26:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:44.912 12:26:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=80877 00:14:44.912 12:26:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:44.912 12:26:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:44.912 [global] 00:14:44.912 thread=1 00:14:44.912 invalidate=1 00:14:44.912 rw=read 00:14:44.912 time_based=1 00:14:44.912 runtime=10 00:14:44.912 ioengine=libaio 00:14:44.912 direct=1 00:14:44.912 bs=4096 00:14:44.912 iodepth=1 00:14:44.912 
norandommap=1 00:14:44.912 numjobs=1 00:14:44.912 00:14:44.912 [job0] 00:14:44.912 filename=/dev/nvme0n1 00:14:44.912 [job1] 00:14:44.912 filename=/dev/nvme0n2 00:14:44.912 [job2] 00:14:44.912 filename=/dev/nvme0n3 00:14:44.912 [job3] 00:14:44.912 filename=/dev/nvme0n4 00:14:44.912 Could not set queue depth (nvme0n1) 00:14:44.912 Could not set queue depth (nvme0n2) 00:14:44.912 Could not set queue depth (nvme0n3) 00:14:44.912 Could not set queue depth (nvme0n4) 00:14:44.912 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:44.912 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:44.912 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:44.912 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:44.912 fio-3.35 00:14:44.912 Starting 4 threads 00:14:48.190 12:26:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:48.190 fio: pid=80920, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:48.190 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=56438784, buflen=4096 00:14:48.190 12:26:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:48.190 fio: pid=80919, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:48.190 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=61665280, buflen=4096 00:14:48.190 12:26:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:48.190 12:26:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:48.448 fio: pid=80917, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:48.448 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=57479168, buflen=4096 00:14:48.448 12:26:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:48.448 12:26:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:48.708 fio: pid=80918, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:48.708 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=54353920, buflen=4096 00:14:48.708 00:14:48.708 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=80917: Fri Jul 12 12:26:17 2024 00:14:48.708 read: IOPS=4115, BW=16.1MiB/s (16.9MB/s)(54.8MiB/3410msec) 00:14:48.708 slat (usec): min=8, max=14549, avg=16.03, stdev=196.53 00:14:48.708 clat (usec): min=2, max=4581, avg=225.62, stdev=75.19 00:14:48.708 lat (usec): min=148, max=14915, avg=241.65, stdev=211.20 00:14:48.708 clat percentiles (usec): 00:14:48.708 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 161], 00:14:48.708 | 30.00th=[ 186], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 241], 00:14:48.708 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 273], 95.00th=[ 306], 00:14:48.708 | 99.00th=[ 449], 99.50th=[ 490], 99.90th=[ 709], 99.95th=[ 963], 00:14:48.708 | 99.99th=[ 1778] 00:14:48.708 bw ( KiB/s): min=14232, max=23264, per=28.25%, avg=17256.00, stdev=3286.49, samples=6 
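The err=121 (Remote I/O error) lines above are produced deliberately by this stage of the test: while the 10-second read jobs are still running, the backing bdevs of nqn.2016-06.io.spdk:cnode1 are deleted on the target (bdev_raid_delete / bdev_malloc_delete), so in-flight reads on the initiator fail with EREMOTEIO. Schematically, the pattern is the one sketched below (illustrative only; the delay and the single-device job are assumptions, not the actual fio.sh logic):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Start a long-running read job in the background...
  fio --name=job0 --filename=/dev/nvme0n1 --rw=read --bs=4096 --iodepth=1 \
      --ioengine=libaio --direct=1 --time_based --runtime=10 &
  fio_pid=$!
  sleep 3                               # let I/O get going (illustrative delay)
  # ...then hot-remove the namespaces' backing bdevs on the target
  "$rpc" bdev_raid_delete concat0
  "$rpc" bdev_raid_delete raid0
  "$rpc" bdev_malloc_delete Malloc0
  wait "$fio_pid" || true               # fio reports io_u errors (err=121) as seen above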
00:14:48.708 iops : min= 3558, max= 5816, avg=4314.00, stdev=821.62, samples=6 00:14:48.708 lat (usec) : 4=0.01%, 250=74.88%, 500=24.72%, 750=0.30%, 1000=0.05% 00:14:48.708 lat (msec) : 2=0.04%, 10=0.01% 00:14:48.708 cpu : usr=0.97%, sys=4.99%, ctx=14050, majf=0, minf=1 00:14:48.708 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:48.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.708 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.708 issued rwts: total=14034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:48.708 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:48.708 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=80918: Fri Jul 12 12:26:17 2024 00:14:48.708 read: IOPS=3610, BW=14.1MiB/s (14.8MB/s)(51.8MiB/3676msec) 00:14:48.708 slat (usec): min=8, max=11834, avg=17.23, stdev=169.46 00:14:48.708 clat (usec): min=140, max=35234, avg=258.24, stdev=330.40 00:14:48.708 lat (usec): min=154, max=35273, avg=275.46, stdev=371.76 00:14:48.708 clat percentiles (usec): 00:14:48.708 | 1.00th=[ 178], 5.00th=[ 198], 10.00th=[ 217], 20.00th=[ 227], 00:14:48.708 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 251], 00:14:48.708 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 318], 00:14:48.708 | 99.00th=[ 457], 99.50th=[ 490], 99.90th=[ 1663], 99.95th=[ 3556], 00:14:48.708 | 99.99th=[ 7635] 00:14:48.708 bw ( KiB/s): min=12144, max=16008, per=23.71%, avg=14486.29, stdev=1431.93, samples=7 00:14:48.708 iops : min= 3036, max= 4002, avg=3621.57, stdev=357.98, samples=7 00:14:48.708 lat (usec) : 250=57.38%, 500=42.16%, 750=0.29%, 1000=0.04% 00:14:48.708 lat (msec) : 2=0.04%, 4=0.05%, 10=0.03%, 50=0.01% 00:14:48.708 cpu : usr=1.03%, sys=4.71%, ctx=13285, majf=0, minf=1 00:14:48.708 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:48.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.708 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.708 issued rwts: total=13271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:48.708 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:48.708 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=80919: Fri Jul 12 12:26:17 2024 00:14:48.708 read: IOPS=4736, BW=18.5MiB/s (19.4MB/s)(58.8MiB/3179msec) 00:14:48.708 slat (usec): min=9, max=15040, avg=16.48, stdev=136.63 00:14:48.708 clat (usec): min=2, max=2174, avg=193.08, stdev=57.84 00:14:48.708 lat (usec): min=156, max=15618, avg=209.56, stdev=151.81 00:14:48.708 clat percentiles (usec): 00:14:48.708 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:14:48.708 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 184], 00:14:48.708 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 210], 95.00th=[ 297], 00:14:48.708 | 99.00th=[ 445], 99.50th=[ 474], 99.90th=[ 603], 99.95th=[ 832], 00:14:48.708 | 99.99th=[ 2114] 00:14:48.708 bw ( KiB/s): min=16824, max=20520, per=32.01%, avg=19556.00, stdev=1413.17, samples=6 00:14:48.708 iops : min= 4206, max= 5130, avg=4889.00, stdev=353.29, samples=6 00:14:48.708 lat (usec) : 4=0.01%, 250=93.91%, 500=5.81%, 750=0.21%, 1000=0.03% 00:14:48.708 lat (msec) : 2=0.01%, 4=0.01% 00:14:48.708 cpu : usr=1.57%, sys=5.98%, ctx=15061, majf=0, minf=1 00:14:48.708 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:48.708 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.708 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.708 issued rwts: total=15056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:48.708 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:48.708 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=80920: Fri Jul 12 12:26:17 2024 00:14:48.708 read: IOPS=4698, BW=18.4MiB/s (19.2MB/s)(53.8MiB/2933msec) 00:14:48.708 slat (nsec): min=11841, max=82504, avg=14796.73, stdev=2839.63 00:14:48.708 clat (usec): min=148, max=3003, avg=196.55, stdev=53.32 00:14:48.708 lat (usec): min=161, max=3046, avg=211.35, stdev=53.92 00:14:48.708 clat percentiles (usec): 00:14:48.708 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:14:48.708 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 186], 00:14:48.708 | 70.00th=[ 192], 80.00th=[ 210], 90.00th=[ 273], 95.00th=[ 289], 00:14:48.708 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 416], 99.95th=[ 461], 00:14:48.708 | 99.99th=[ 2073] 00:14:48.708 bw ( KiB/s): min=13536, max=20784, per=30.44%, avg=18593.60, stdev=3184.87, samples=5 00:14:48.708 iops : min= 3384, max= 5196, avg=4648.40, stdev=796.22, samples=5 00:14:48.708 lat (usec) : 250=83.11%, 500=16.84%, 750=0.02% 00:14:48.708 lat (msec) : 2=0.01%, 4=0.01% 00:14:48.708 cpu : usr=1.50%, sys=5.87%, ctx=13780, majf=0, minf=1 00:14:48.708 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:48.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.708 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.708 issued rwts: total=13780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:48.708 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:48.708 00:14:48.708 Run status group 0 (all jobs): 00:14:48.708 READ: bw=59.7MiB/s (62.6MB/s), 14.1MiB/s-18.5MiB/s (14.8MB/s-19.4MB/s), io=219MiB (230MB), run=2933-3676msec 00:14:48.708 00:14:48.708 Disk stats (read/write): 00:14:48.708 nvme0n1: ios=13895/0, merge=0/0, ticks=3049/0, in_queue=3049, util=95.11% 00:14:48.708 nvme0n2: ios=13013/0, merge=0/0, ticks=3305/0, in_queue=3305, util=95.50% 00:14:48.708 nvme0n3: ios=14904/0, merge=0/0, ticks=2888/0, in_queue=2888, util=96.18% 00:14:48.708 nvme0n4: ios=13493/0, merge=0/0, ticks=2698/0, in_queue=2698, util=96.73% 00:14:48.708 12:26:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:48.708 12:26:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:48.972 12:26:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:48.972 12:26:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:49.538 12:26:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:49.538 12:26:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:49.538 12:26:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:49.538 12:26:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:49.797 12:26:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:49.797 12:26:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:50.363 12:26:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:50.363 12:26:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 80877 00:14:50.363 12:26:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:50.363 12:26:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:50.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.363 12:26:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:50.363 12:26:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:14:50.363 12:26:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:50.363 12:26:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:50.363 12:26:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:50.363 12:26:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:50.363 nvmf hotplug test: fio failed as expected 00:14:50.363 12:26:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:14:50.363 12:26:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:50.363 12:26:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:50.363 12:26:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:50.621 rmmod nvme_tcp 00:14:50.621 rmmod nvme_fabrics 00:14:50.621 rmmod nvme_keyring 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 80488 ']' 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # 
killprocess 80488 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 80488 ']' 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 80488 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80488 00:14:50.621 killing process with pid 80488 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80488' 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 80488 00:14:50.621 12:26:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 80488 00:14:50.880 12:26:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:50.880 12:26:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:50.880 12:26:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:50.880 12:26:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:50.880 12:26:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:50.880 12:26:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.880 12:26:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.880 12:26:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.880 12:26:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:50.880 00:14:50.880 real 0m19.802s 00:14:50.880 user 1m14.726s 00:14:50.880 sys 0m10.477s 00:14:50.880 12:26:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:50.880 12:26:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.880 ************************************ 00:14:50.880 END TEST nvmf_fio_target 00:14:50.880 ************************************ 00:14:50.880 12:26:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:50.880 12:26:19 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:50.880 12:26:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:50.880 12:26:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:50.880 12:26:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:50.880 ************************************ 00:14:50.880 START TEST nvmf_bdevio 00:14:50.880 ************************************ 00:14:50.880 12:26:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:50.880 * Looking for test storage... 
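The read-phase workload whose Remote I/O errors appear above is driven by a plain fio job file produced via scripts/fio-wrapper (-p nvmf -i 4096 -d 1 -t read -r 10); the parameters below are the ones echoed in the trace, reassembled here without timestamps as a readability aid rather than the literal file on disk. The err=121 failures are intentional: the script deletes the backing bdevs (bdev_raid_delete concat0/raid0, bdev_malloc_delete Malloc0 through Malloc6) while these reads are in flight, which is what "nvmf hotplug test: fio failed as expected" refers to.

  [global]
  thread=1
  invalidate=1
  rw=read
  time_based=1
  runtime=10
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=1
  norandommap=1
  numjobs=1

  [job0]
  filename=/dev/nvme0n1
  [job1]
  filename=/dev/nvme0n2
  [job2]
  filename=/dev/nvme0n3
  [job3]
  filename=/dev/nvme0n4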
00:14:50.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:50.880 12:26:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:50.880 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:50.880 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.880 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.880 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.880 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.880 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.880 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.880 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.880 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.880 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.880 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.880 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:14:50.880 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.881 12:26:19 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:50.881 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:51.139 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:51.139 Cannot find device "nvmf_tgt_br" 00:14:51.139 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:14:51.139 12:26:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:51.139 Cannot find device "nvmf_tgt_br2" 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:51.139 Cannot find device "nvmf_tgt_br" 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:51.139 Cannot find device "nvmf_tgt_br2" 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:51.139 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:51.139 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:51.139 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:51.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:14:51.398 00:14:51.398 --- 10.0.0.2 ping statistics --- 00:14:51.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.398 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:51.398 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:51.398 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:14:51.398 00:14:51.398 --- 10.0.0.3 ping statistics --- 00:14:51.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.398 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:51.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:51.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:14:51.398 00:14:51.398 --- 10.0.0.1 ping statistics --- 00:14:51.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.398 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=81190 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 81190 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 81190 ']' 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:51.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:51.398 12:26:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:51.398 [2024-07-12 12:26:20.370113] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:51.398 [2024-07-12 12:26:20.370199] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.657 [2024-07-12 12:26:20.504765] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:51.657 [2024-07-12 12:26:20.588393] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.657 [2024-07-12 12:26:20.588462] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:51.657 [2024-07-12 12:26:20.588473] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.657 [2024-07-12 12:26:20.588481] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.657 [2024-07-12 12:26:20.588488] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:51.657 [2024-07-12 12:26:20.588620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:51.657 [2024-07-12 12:26:20.589286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:51.657 [2024-07-12 12:26:20.589477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:51.657 [2024-07-12 12:26:20.589481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:51.657 [2024-07-12 12:26:20.643379] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:52.591 [2024-07-12 12:26:21.351868] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:52.591 Malloc0 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:52.591 [2024-07-12 12:26:21.427677] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:52.591 { 00:14:52.591 "params": { 00:14:52.591 "name": "Nvme$subsystem", 00:14:52.591 "trtype": "$TEST_TRANSPORT", 00:14:52.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:52.591 "adrfam": "ipv4", 00:14:52.591 "trsvcid": "$NVMF_PORT", 00:14:52.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:52.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:52.591 "hdgst": ${hdgst:-false}, 00:14:52.591 "ddgst": ${ddgst:-false} 00:14:52.591 }, 00:14:52.591 "method": "bdev_nvme_attach_controller" 00:14:52.591 } 00:14:52.591 EOF 00:14:52.591 )") 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:52.591 12:26:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:52.591 "params": { 00:14:52.591 "name": "Nvme1", 00:14:52.591 "trtype": "tcp", 00:14:52.591 "traddr": "10.0.0.2", 00:14:52.591 "adrfam": "ipv4", 00:14:52.591 "trsvcid": "4420", 00:14:52.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:52.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:52.591 "hdgst": false, 00:14:52.591 "ddgst": false 00:14:52.591 }, 00:14:52.591 "method": "bdev_nvme_attach_controller" 00:14:52.591 }' 00:14:52.591 [2024-07-12 12:26:21.486800] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
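The --json /dev/fd/62 handed to bdevio above carries the attach-controller configuration that gen_nvmf_target_json assembles from the heredoc template; stripped of timestamps, the object printed in the trace is the following (any surrounding JSON-config wrapper the helper may add is not shown in this log and is omitted here as well):

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

In other words, the bdevio process acts as the NVMe/TCP initiator, attaching to the subsystem the target exports on 10.0.0.2:4420 and then running its CUnit suite against the resulting Nvme1n1 block device.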
00:14:52.591 [2024-07-12 12:26:21.486896] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81226 ] 00:14:52.591 [2024-07-12 12:26:21.628316] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:52.849 [2024-07-12 12:26:21.729543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.849 [2024-07-12 12:26:21.729726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.850 [2024-07-12 12:26:21.729727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.850 [2024-07-12 12:26:21.798731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:52.850 I/O targets: 00:14:52.850 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:52.850 00:14:52.850 00:14:52.850 CUnit - A unit testing framework for C - Version 2.1-3 00:14:52.850 http://cunit.sourceforge.net/ 00:14:52.850 00:14:52.850 00:14:52.850 Suite: bdevio tests on: Nvme1n1 00:14:52.850 Test: blockdev write read block ...passed 00:14:52.850 Test: blockdev write zeroes read block ...passed 00:14:52.850 Test: blockdev write zeroes read no split ...passed 00:14:53.108 Test: blockdev write zeroes read split ...passed 00:14:53.108 Test: blockdev write zeroes read split partial ...passed 00:14:53.108 Test: blockdev reset ...[2024-07-12 12:26:21.953404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:53.108 [2024-07-12 12:26:21.953511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2207cd0 (9): Bad file descriptor 00:14:53.108 [2024-07-12 12:26:21.964504] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:53.108 passed 00:14:53.108 Test: blockdev write read 8 blocks ...passed 00:14:53.108 Test: blockdev write read size > 128k ...passed 00:14:53.108 Test: blockdev write read invalid size ...passed 00:14:53.108 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:53.108 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:53.108 Test: blockdev write read max offset ...passed 00:14:53.108 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:53.108 Test: blockdev writev readv 8 blocks ...passed 00:14:53.108 Test: blockdev writev readv 30 x 1block ...passed 00:14:53.108 Test: blockdev writev readv block ...passed 00:14:53.108 Test: blockdev writev readv size > 128k ...passed 00:14:53.108 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:53.108 Test: blockdev comparev and writev ...[2024-07-12 12:26:21.972511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.108 [2024-07-12 12:26:21.972562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:53.108 [2024-07-12 12:26:21.972588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.108 [2024-07-12 12:26:21.972602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:53.108 [2024-07-12 12:26:21.973097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.108 [2024-07-12 12:26:21.973133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:53.108 [2024-07-12 12:26:21.973156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.108 [2024-07-12 12:26:21.973173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:53.108 [2024-07-12 12:26:21.973812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.108 [2024-07-12 12:26:21.973846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:53.108 [2024-07-12 12:26:21.973870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.108 [2024-07-12 12:26:21.973882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:53.108 [2024-07-12 12:26:21.974326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.108 [2024-07-12 12:26:21.974360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:53.108 [2024-07-12 12:26:21.974382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.108 [2024-07-12 12:26:21.974395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:53.108 passed 00:14:53.108 Test: blockdev nvme passthru rw ...passed 00:14:53.108 Test: blockdev nvme passthru vendor specific ...[2024-07-12 12:26:21.975346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.108 [2024-07-12 12:26:21.975377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:53.108 [2024-07-12 12:26:21.975503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.108 [2024-07-12 12:26:21.975533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:53.108 [2024-07-12 12:26:21.975652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.108 [2024-07-12 12:26:21.975672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:53.108 [2024-07-12 12:26:21.975804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.108 [2024-07-12 12:26:21.975830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:53.108 passed 00:14:53.108 Test: blockdev nvme admin passthru ...passed 00:14:53.108 Test: blockdev copy ...passed 00:14:53.108 00:14:53.108 Run Summary: Type Total Ran Passed Failed Inactive 00:14:53.108 suites 1 1 n/a 0 0 00:14:53.108 tests 23 23 23 0 0 00:14:53.108 asserts 152 152 152 0 n/a 00:14:53.108 00:14:53.108 Elapsed time = 0.160 seconds 00:14:53.108 12:26:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.108 12:26:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.108 12:26:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:53.108 12:26:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.108 12:26:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:53.108 12:26:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:53.108 12:26:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:53.108 12:26:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:53.366 12:26:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:53.366 12:26:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:53.366 12:26:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:53.366 12:26:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:53.366 rmmod nvme_tcp 00:14:53.366 rmmod nvme_fabrics 00:14:53.366 rmmod nvme_keyring 00:14:53.366 12:26:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:53.366 12:26:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:53.366 12:26:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:53.366 12:26:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 81190 ']' 00:14:53.366 12:26:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 81190 00:14:53.366 12:26:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
81190 ']' 00:14:53.366 12:26:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 81190 00:14:53.366 12:26:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:14:53.366 12:26:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:53.366 12:26:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81190 00:14:53.366 12:26:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:53.366 12:26:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:53.366 killing process with pid 81190 00:14:53.366 12:26:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81190' 00:14:53.366 12:26:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 81190 00:14:53.366 12:26:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 81190 00:14:53.624 12:26:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:53.624 12:26:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:53.624 12:26:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:53.624 12:26:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.624 12:26:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:53.624 12:26:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.624 12:26:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.624 12:26:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.624 12:26:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:53.624 00:14:53.624 real 0m2.755s 00:14:53.624 user 0m9.044s 00:14:53.624 sys 0m0.801s 00:14:53.624 12:26:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:53.624 ************************************ 00:14:53.624 END TEST nvmf_bdevio 00:14:53.624 12:26:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:53.624 ************************************ 00:14:53.624 12:26:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:53.624 12:26:22 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:53.624 12:26:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:53.624 12:26:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:53.624 12:26:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:53.624 ************************************ 00:14:53.624 START TEST nvmf_auth_target 00:14:53.624 ************************************ 00:14:53.624 12:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:53.882 * Looking for test storage... 
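For context on what the bdevio run that just ended was exercising: the target side was configured through the rpc_cmd calls visible earlier in the trace. Collected into standalone form (assuming rpc_cmd forwards unchanged to the same scripts/rpc.py invoked directly elsewhere in this log), the sequence is roughly:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

This is a reading of the trace rather than a verbatim command history: the arguments match the rpc_cmd lines above (a 64 MiB, 512-byte-block malloc bdev exported as a namespace of cnode1, with a TCP listener on 10.0.0.2:4420), but the exact invocation path inside the harness is an assumption.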
00:14:53.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:53.882 12:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:53.882 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:53.882 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.882 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.882 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.882 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.882 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:53.883 Cannot find device "nvmf_tgt_br" 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:53.883 Cannot find device "nvmf_tgt_br2" 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:53.883 Cannot find device "nvmf_tgt_br" 00:14:53.883 
12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:53.883 Cannot find device "nvmf_tgt_br2" 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:53.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:53.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:53.883 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:54.141 12:26:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:54.141 12:26:23 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:54.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:14:54.141 00:14:54.141 --- 10.0.0.2 ping statistics --- 00:14:54.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.141 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:54.141 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:54.141 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:14:54.141 00:14:54.141 --- 10.0.0.3 ping statistics --- 00:14:54.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.141 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:54.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:54.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:54.141 00:14:54.141 --- 10.0.0.1 ping statistics --- 00:14:54.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.141 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=81400 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 81400 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 81400 ']' 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.141 12:26:23 
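# nvmf_veth_init (traced above) builds the test network before the target app is started
# inside the namespace. A condensed sketch using only commands that appear in the trace;
# the "ip link set ... up" steps and the second target interface are omitted for brevity.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                        # bridge the two host-side peer ends
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the target port
ping -c 1 10.0.0.2                                             # reachability check, as in the trace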
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:54.141 12:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=81432 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e407dc83264f0feb69c19a9810bc92d057bd79e69af65dc2 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.yRU 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e407dc83264f0feb69c19a9810bc92d057bd79e69af65dc2 0 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e407dc83264f0feb69c19a9810bc92d057bd79e69af65dc2 0 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e407dc83264f0feb69c19a9810bc92d057bd79e69af65dc2 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.yRU 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.yRU 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.yRU 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7d3ccb3ae32a906f8cddf5fd52d0e612d59edc099af5d978e622de997708b667 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1zH 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7d3ccb3ae32a906f8cddf5fd52d0e612d59edc099af5d978e622de997708b667 3 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7d3ccb3ae32a906f8cddf5fd52d0e612d59edc099af5d978e622de997708b667 3 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7d3ccb3ae32a906f8cddf5fd52d0e612d59edc099af5d978e622de997708b667 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1zH 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1zH 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.1zH 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0d126b1f21fd31fccbfc7d49637367e0 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.KKb 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0d126b1f21fd31fccbfc7d49637367e0 1 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0d126b1f21fd31fccbfc7d49637367e0 1 
00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0d126b1f21fd31fccbfc7d49637367e0 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.KKb 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.KKb 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.KKb 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:14:55.515 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f6a83a3855ecf7f246fcb4a0dac26ac3a1f2d7fe8c442e6f 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.z4O 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f6a83a3855ecf7f246fcb4a0dac26ac3a1f2d7fe8c442e6f 2 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f6a83a3855ecf7f246fcb4a0dac26ac3a1f2d7fe8c442e6f 2 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f6a83a3855ecf7f246fcb4a0dac26ac3a1f2d7fe8c442e6f 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.z4O 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.z4O 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.z4O 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:55.516 
12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=87a107650cee71ebe88ee4a73bd5c6e6bf6a5ec651356736 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.N7L 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 87a107650cee71ebe88ee4a73bd5c6e6bf6a5ec651356736 2 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 87a107650cee71ebe88ee4a73bd5c6e6bf6a5ec651356736 2 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=87a107650cee71ebe88ee4a73bd5c6e6bf6a5ec651356736 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.N7L 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.N7L 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.N7L 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:55.516 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:55.773 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3c664be1e88baaf69d2c90208feaf622 00:14:55.773 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.vvf 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3c664be1e88baaf69d2c90208feaf622 1 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3c664be1e88baaf69d2c90208feaf622 1 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3c664be1e88baaf69d2c90208feaf622 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.vvf 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.vvf 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.vvf 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6c8b052dd352d39f359cae0f399cf42c42e42d69a95bef49a1681ef7b1f80cd9 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Hs4 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6c8b052dd352d39f359cae0f399cf42c42e42d69a95bef49a1681ef7b1f80cd9 3 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6c8b052dd352d39f359cae0f399cf42c42e42d69a95bef49a1681ef7b1f80cd9 3 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6c8b052dd352d39f359cae0f399cf42c42e42d69a95bef49a1681ef7b1f80cd9 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Hs4 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Hs4 00:14:55.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Hs4 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 81400 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 81400 ']' 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:55.774 12:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
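# gen_dhchap_key (traced above) turns random bytes into an NVMe DH-HMAC-CHAP secret file.
# A minimal sketch of the visible steps; the actual encoding is done by an inline python
# snippet that the trace does not capture, but the resulting secrets (visible later in the
# "nvme connect" commands) have the form DHHC-1:<hash id>:<base64 of the ASCII hex key plus
# a short trailer>:, where <hash id> is 00/01/02/03 for null/sha256/sha384/sha512 as in the
# digests map above.
key=$(xxd -p -c0 -l 24 /dev/urandom)      # 24 random bytes -> 48 hex chars, the requested secret length
file=$(mktemp -t spdk.key-null.XXX)       # e.g. /tmp/spdk.key-null.yRU
# format_dhchap_key writes "DHHC-1:00:<encoded key>:" into $file (encoding step not shown in the log)
chmod 0600 "$file"                        # restrict permissions, as the trace does
keys[0]=$file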
00:14:56.032 12:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:56.032 12:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:56.032 12:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 81432 /var/tmp/host.sock 00:14:56.032 12:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 81432 ']' 00:14:56.032 12:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:56.032 12:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:56.032 12:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:56.032 12:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:56.032 12:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.289 12:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:56.289 12:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:56.289 12:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:14:56.289 12:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.289 12:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.289 12:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.289 12:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:56.289 12:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yRU 00:14:56.289 12:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.289 12:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.289 12:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.289 12:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.yRU 00:14:56.289 12:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.yRU 00:14:56.546 12:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.1zH ]] 00:14:56.546 12:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1zH 00:14:56.546 12:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.546 12:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.546 12:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.546 12:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1zH 00:14:56.546 12:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1zH 00:14:56.804 12:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:56.804 12:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.KKb 00:14:56.804 12:26:25 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.804 12:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.804 12:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.804 12:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.KKb 00:14:56.804 12:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.KKb 00:14:57.061 12:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.z4O ]] 00:14:57.061 12:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.z4O 00:14:57.061 12:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.062 12:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.062 12:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.062 12:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.z4O 00:14:57.062 12:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.z4O 00:14:57.319 12:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:57.319 12:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.N7L 00:14:57.319 12:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.319 12:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.319 12:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.319 12:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.N7L 00:14:57.319 12:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.N7L 00:14:57.577 12:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.vvf ]] 00:14:57.577 12:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.vvf 00:14:57.577 12:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.577 12:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.577 12:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.577 12:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.vvf 00:14:57.577 12:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.vvf 00:14:57.836 12:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:57.836 12:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Hs4 00:14:57.836 12:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.836 12:26:26 nvmf_tcp.nvmf_auth_target -- 
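# Each generated key file is registered twice, as traced above: once with the nvmf target
# (rpc_cmd, default socket /var/tmp/spdk.sock) and once with the host-side application
# (hostrpc, socket /var/tmp/host.sock), under the keyring names key0..key3 / ckey0..ckey2.
# One registration pair, using the rpc.py path from the trace:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc keyring_file_add_key key0 /tmp/spdk.key-null.yRU                        # target side
$rpc -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.yRU  # host side
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1zH
$rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1zH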
common/autotest_common.sh@10 -- # set +x 00:14:57.836 12:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.836 12:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Hs4 00:14:57.836 12:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Hs4 00:14:58.119 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:58.119 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:58.119 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:58.119 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:58.119 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:58.119 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:58.407 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:58.407 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:58.407 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:58.407 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:58.407 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:58.407 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.407 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.407 12:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.407 12:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.407 12:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.407 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.407 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.665 00:14:58.665 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:58.665 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.665 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.923 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.923 12:26:27 nvmf_tcp.nvmf_auth_target -- 
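# connect_authenticate (traced above for sha256 / dhgroup "null" / key0) provisions both
# sides before each connection attempt. Condensed from the commands in the trace; the
# variables stand in for the literal rpc.py path and NQNs shown earlier.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups null          # pin the host to the combination under test
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0              # target: allow this host, require DH-HMAC-CHAP
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0              # host: connect and authenticate as nvme0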
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.923 12:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.923 12:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.923 12:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.923 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.923 { 00:14:58.923 "cntlid": 1, 00:14:58.923 "qid": 0, 00:14:58.923 "state": "enabled", 00:14:58.923 "thread": "nvmf_tgt_poll_group_000", 00:14:58.923 "listen_address": { 00:14:58.923 "trtype": "TCP", 00:14:58.923 "adrfam": "IPv4", 00:14:58.923 "traddr": "10.0.0.2", 00:14:58.923 "trsvcid": "4420" 00:14:58.923 }, 00:14:58.923 "peer_address": { 00:14:58.923 "trtype": "TCP", 00:14:58.923 "adrfam": "IPv4", 00:14:58.923 "traddr": "10.0.0.1", 00:14:58.923 "trsvcid": "56680" 00:14:58.923 }, 00:14:58.923 "auth": { 00:14:58.923 "state": "completed", 00:14:58.923 "digest": "sha256", 00:14:58.923 "dhgroup": "null" 00:14:58.923 } 00:14:58.923 } 00:14:58.923 ]' 00:14:58.923 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.923 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.923 12:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.923 12:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:58.923 12:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:59.180 12:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.180 12:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.180 12:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.437 12:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:00:ZTQwN2RjODMyNjRmMGZlYjY5YzE5YTk4MTBiYzkyZDA1N2JkNzllNjlhZjY1ZGMyQY6qow==: --dhchap-ctrl-secret DHHC-1:03:N2QzY2NiM2FlMzJhOTA2ZjhjZGRmNWZkNTJkMGU2MTJkNTllZGMwOTlhZjVkOTc4ZTYyMmRlOTk3NzA4YjY2N1DiI2Y=: 00:15:04.692 12:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.692 12:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:04.692 12:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.692 12:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.692 12:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.692 12:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.692 12:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:04.692 12:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
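# After the SPDK-to-SPDK attach succeeds, the test checks the negotiated auth parameters on
# the target's qpair and then repeats the handshake with the kernel initiator, as traced
# above. Condensed sketch, continuing the variables from the previous sketch; the DHHC-1
# secrets for nvme-cli are the same key0/ckey0 material pasted literally (elided here),
# because the kernel initiator takes secret strings rather than keyring names.
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed", plus digest/dhgroup checks
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "$hostnqn" --hostid "${hostnqn#*uuid:}" \
    --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'   # host and controller secrets (bidirectional auth)
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"        # undo the allow-listing before the next combination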
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:04.693 12:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:15:04.693 12:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.693 12:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:04.693 12:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:04.693 12:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:04.693 12:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.693 12:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.693 12:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.693 12:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.693 12:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.693 12:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.693 12:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.693 00:15:04.693 12:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.693 12:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.693 12:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.693 12:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.693 12:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.693 12:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.693 12:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.693 12:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.693 12:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.693 { 00:15:04.693 "cntlid": 3, 00:15:04.693 "qid": 0, 00:15:04.693 "state": "enabled", 00:15:04.693 "thread": "nvmf_tgt_poll_group_000", 00:15:04.693 "listen_address": { 00:15:04.693 "trtype": "TCP", 00:15:04.693 "adrfam": "IPv4", 00:15:04.693 "traddr": "10.0.0.2", 00:15:04.693 "trsvcid": "4420" 00:15:04.693 }, 00:15:04.693 "peer_address": { 00:15:04.693 "trtype": "TCP", 00:15:04.693 "adrfam": "IPv4", 00:15:04.693 "traddr": "10.0.0.1", 00:15:04.693 "trsvcid": "56702" 00:15:04.693 }, 00:15:04.693 "auth": { 00:15:04.693 "state": "completed", 00:15:04.693 "digest": "sha256", 00:15:04.693 "dhgroup": "null" 00:15:04.693 } 
00:15:04.693 } 00:15:04.693 ]' 00:15:04.693 12:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.693 12:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:04.693 12:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.693 12:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:04.693 12:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.950 12:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.950 12:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.950 12:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.207 12:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:01:MGQxMjZiMWYyMWZkMzFmY2NiZmM3ZDQ5NjM3MzY3ZTCmNfkx: --dhchap-ctrl-secret DHHC-1:02:ZjZhODNhMzg1NWVjZjdmMjQ2ZmNiNGEwZGFjMjZhYzNhMWYyZDdmZThjNDQyZTZmmBpdxg==: 00:15:05.783 12:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.783 12:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:05.783 12:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.783 12:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.783 12:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.783 12:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:05.783 12:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:05.783 12:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:06.040 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:15:06.040 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.040 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:06.040 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:06.040 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:06.040 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.040 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.040 12:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.040 12:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:15:06.040 12:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.040 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.041 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.296 00:15:06.296 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:06.296 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.296 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:06.553 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.553 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.553 12:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.553 12:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.553 12:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.553 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:06.553 { 00:15:06.553 "cntlid": 5, 00:15:06.553 "qid": 0, 00:15:06.553 "state": "enabled", 00:15:06.553 "thread": "nvmf_tgt_poll_group_000", 00:15:06.553 "listen_address": { 00:15:06.553 "trtype": "TCP", 00:15:06.553 "adrfam": "IPv4", 00:15:06.553 "traddr": "10.0.0.2", 00:15:06.553 "trsvcid": "4420" 00:15:06.553 }, 00:15:06.553 "peer_address": { 00:15:06.553 "trtype": "TCP", 00:15:06.553 "adrfam": "IPv4", 00:15:06.553 "traddr": "10.0.0.1", 00:15:06.553 "trsvcid": "51396" 00:15:06.553 }, 00:15:06.553 "auth": { 00:15:06.553 "state": "completed", 00:15:06.553 "digest": "sha256", 00:15:06.553 "dhgroup": "null" 00:15:06.553 } 00:15:06.553 } 00:15:06.553 ]' 00:15:06.553 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:06.810 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.810 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:06.810 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:06.810 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:06.810 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.810 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.810 12:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.067 12:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 
2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:02:ODdhMTA3NjUwY2VlNzFlYmU4OGVlNGE3M2JkNWM2ZTZiZjZhNWVjNjUxMzU2NzM2INGM+Q==: --dhchap-ctrl-secret DHHC-1:01:M2M2NjRiZTFlODhiYWFmNjlkMmM5MDIwOGZlYWY2MjLVusUc: 00:15:07.998 12:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.998 12:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:07.998 12:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.998 12:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.998 12:26:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.998 12:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:07.998 12:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:07.998 12:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:08.255 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:15:08.255 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:08.255 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:08.255 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:08.255 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:08.255 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.255 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key3 00:15:08.255 12:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.255 12:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.255 12:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.255 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:08.255 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:08.512 00:15:08.512 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:08.512 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:08.512 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.770 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:15:08.770 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.770 12:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.770 12:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.770 12:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.770 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:08.770 { 00:15:08.770 "cntlid": 7, 00:15:08.770 "qid": 0, 00:15:08.770 "state": "enabled", 00:15:08.770 "thread": "nvmf_tgt_poll_group_000", 00:15:08.770 "listen_address": { 00:15:08.770 "trtype": "TCP", 00:15:08.770 "adrfam": "IPv4", 00:15:08.770 "traddr": "10.0.0.2", 00:15:08.770 "trsvcid": "4420" 00:15:08.770 }, 00:15:08.770 "peer_address": { 00:15:08.770 "trtype": "TCP", 00:15:08.770 "adrfam": "IPv4", 00:15:08.770 "traddr": "10.0.0.1", 00:15:08.770 "trsvcid": "51418" 00:15:08.770 }, 00:15:08.770 "auth": { 00:15:08.770 "state": "completed", 00:15:08.770 "digest": "sha256", 00:15:08.770 "dhgroup": "null" 00:15:08.770 } 00:15:08.770 } 00:15:08.770 ]' 00:15:08.770 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:08.770 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:08.770 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:08.770 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:08.770 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:09.027 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.027 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.027 12:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.284 12:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:03:NmM4YjA1MmRkMzUyZDM5ZjM1OWNhZTBmMzk5Y2Y0MmM0MmU0MmQ2OWE5NWJlZjQ5YTE2ODFlZjdiMWY4MGNkOc8c0aU=: 00:15:09.848 12:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.848 12:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:09.848 12:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.848 12:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.848 12:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.848 12:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:09.848 12:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:09.848 12:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:09.848 12:26:38 
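# By this point the provision / connect / verify / teardown cycle has run for all four keys
# with dhgroup "null"; the loop then advances to the next DH group and the host options are
# re-pinned before each pass, as in the trace:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
# The loops declared at the top of auth.sh walk the remaining digest x dhgroup x key
# combinations the same way, so the rest of the log repeats this pattern.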
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:10.105 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:15:10.105 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:10.105 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:10.105 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:10.105 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:10.105 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.105 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.105 12:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.105 12:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.105 12:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.105 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.105 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.363 00:15:10.363 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:10.363 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:10.363 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.928 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.928 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.928 12:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.928 12:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.928 12:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.928 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.928 { 00:15:10.928 "cntlid": 9, 00:15:10.928 "qid": 0, 00:15:10.928 "state": "enabled", 00:15:10.928 "thread": "nvmf_tgt_poll_group_000", 00:15:10.928 "listen_address": { 00:15:10.928 "trtype": "TCP", 00:15:10.928 "adrfam": "IPv4", 00:15:10.928 "traddr": "10.0.0.2", 00:15:10.928 "trsvcid": "4420" 00:15:10.928 }, 00:15:10.928 "peer_address": { 00:15:10.928 "trtype": "TCP", 00:15:10.928 "adrfam": "IPv4", 00:15:10.928 "traddr": "10.0.0.1", 00:15:10.928 "trsvcid": "51450" 00:15:10.928 }, 00:15:10.928 "auth": { 00:15:10.928 "state": "completed", 00:15:10.928 
"digest": "sha256", 00:15:10.928 "dhgroup": "ffdhe2048" 00:15:10.928 } 00:15:10.928 } 00:15:10.928 ]' 00:15:10.928 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:10.928 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:10.928 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.928 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:10.928 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.928 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.928 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.928 12:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.185 12:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:00:ZTQwN2RjODMyNjRmMGZlYjY5YzE5YTk4MTBiYzkyZDA1N2JkNzllNjlhZjY1ZGMyQY6qow==: --dhchap-ctrl-secret DHHC-1:03:N2QzY2NiM2FlMzJhOTA2ZjhjZGRmNWZkNTJkMGU2MTJkNTllZGMwOTlhZjVkOTc4ZTYyMmRlOTk3NzA4YjY2N1DiI2Y=: 00:15:12.126 12:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.126 12:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:12.126 12:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.126 12:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.126 12:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.126 12:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.126 12:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:12.126 12:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:12.126 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:15:12.126 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:12.126 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:12.126 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:12.126 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:12.126 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.126 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.126 12:26:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.126 12:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.126 12:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.126 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.126 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.382 00:15:12.382 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.382 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.382 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.639 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.639 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.639 12:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.639 12:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.639 12:26:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.639 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.639 { 00:15:12.639 "cntlid": 11, 00:15:12.639 "qid": 0, 00:15:12.639 "state": "enabled", 00:15:12.639 "thread": "nvmf_tgt_poll_group_000", 00:15:12.639 "listen_address": { 00:15:12.639 "trtype": "TCP", 00:15:12.639 "adrfam": "IPv4", 00:15:12.639 "traddr": "10.0.0.2", 00:15:12.639 "trsvcid": "4420" 00:15:12.639 }, 00:15:12.639 "peer_address": { 00:15:12.639 "trtype": "TCP", 00:15:12.639 "adrfam": "IPv4", 00:15:12.639 "traddr": "10.0.0.1", 00:15:12.639 "trsvcid": "51486" 00:15:12.639 }, 00:15:12.639 "auth": { 00:15:12.639 "state": "completed", 00:15:12.639 "digest": "sha256", 00:15:12.639 "dhgroup": "ffdhe2048" 00:15:12.639 } 00:15:12.639 } 00:15:12.639 ]' 00:15:12.639 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.639 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:12.640 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:12.896 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:12.896 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.896 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.896 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.896 12:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.153 12:26:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:01:MGQxMjZiMWYyMWZkMzFmY2NiZmM3ZDQ5NjM3MzY3ZTCmNfkx: --dhchap-ctrl-secret DHHC-1:02:ZjZhODNhMzg1NWVjZjdmMjQ2ZmNiNGEwZGFjMjZhYzNhMWYyZDdmZThjNDQyZTZmmBpdxg==: 00:15:13.717 12:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.717 12:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:13.717 12:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.717 12:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.717 12:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.717 12:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.717 12:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:13.717 12:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:13.975 12:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:15:13.975 12:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.975 12:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:13.975 12:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:13.975 12:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:13.975 12:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.975 12:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.975 12:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.975 12:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.975 12:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.975 12:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.975 12:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.539 00:15:14.539 12:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.539 12:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:15:14.539 12:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.539 12:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.539 12:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.539 12:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.539 12:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.539 12:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.841 12:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.841 { 00:15:14.841 "cntlid": 13, 00:15:14.841 "qid": 0, 00:15:14.841 "state": "enabled", 00:15:14.841 "thread": "nvmf_tgt_poll_group_000", 00:15:14.841 "listen_address": { 00:15:14.841 "trtype": "TCP", 00:15:14.841 "adrfam": "IPv4", 00:15:14.841 "traddr": "10.0.0.2", 00:15:14.841 "trsvcid": "4420" 00:15:14.841 }, 00:15:14.841 "peer_address": { 00:15:14.841 "trtype": "TCP", 00:15:14.841 "adrfam": "IPv4", 00:15:14.841 "traddr": "10.0.0.1", 00:15:14.841 "trsvcid": "51522" 00:15:14.841 }, 00:15:14.841 "auth": { 00:15:14.841 "state": "completed", 00:15:14.841 "digest": "sha256", 00:15:14.841 "dhgroup": "ffdhe2048" 00:15:14.841 } 00:15:14.841 } 00:15:14.841 ]' 00:15:14.841 12:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.841 12:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.841 12:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:14.841 12:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:14.841 12:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:14.841 12:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.841 12:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.841 12:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.099 12:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:02:ODdhMTA3NjUwY2VlNzFlYmU4OGVlNGE3M2JkNWM2ZTZiZjZhNWVjNjUxMzU2NzM2INGM+Q==: --dhchap-ctrl-secret DHHC-1:01:M2M2NjRiZTFlODhiYWFmNjlkMmM5MDIwOGZlYWY2MjLVusUc: 00:15:15.665 12:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.923 12:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:15.923 12:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.923 12:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.923 12:26:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.923 12:26:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.923 12:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:15.923 12:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:16.181 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:15:16.181 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.181 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:16.181 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:16.181 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:16.181 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.181 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key3 00:15:16.181 12:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.181 12:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.181 12:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.181 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:16.181 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:16.439 00:15:16.439 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.439 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.439 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.697 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.697 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.697 12:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.697 12:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.697 12:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.697 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.697 { 00:15:16.697 "cntlid": 15, 00:15:16.697 "qid": 0, 00:15:16.697 "state": "enabled", 00:15:16.697 "thread": "nvmf_tgt_poll_group_000", 00:15:16.697 "listen_address": { 00:15:16.697 "trtype": "TCP", 00:15:16.697 "adrfam": "IPv4", 00:15:16.697 "traddr": "10.0.0.2", 00:15:16.697 "trsvcid": "4420" 00:15:16.697 }, 00:15:16.697 "peer_address": { 00:15:16.697 "trtype": "TCP", 00:15:16.697 "adrfam": "IPv4", 
00:15:16.697 "traddr": "10.0.0.1", 00:15:16.697 "trsvcid": "58674" 00:15:16.697 }, 00:15:16.697 "auth": { 00:15:16.697 "state": "completed", 00:15:16.697 "digest": "sha256", 00:15:16.697 "dhgroup": "ffdhe2048" 00:15:16.697 } 00:15:16.697 } 00:15:16.697 ]' 00:15:16.697 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.697 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:16.697 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:16.697 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:16.697 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.697 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.697 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.697 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.955 12:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:03:NmM4YjA1MmRkMzUyZDM5ZjM1OWNhZTBmMzk5Y2Y0MmM0MmU0MmQ2OWE5NWJlZjQ5YTE2ODFlZjdiMWY4MGNkOc8c0aU=: 00:15:17.889 12:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.889 12:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:17.889 12:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.889 12:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.889 12:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.889 12:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:17.889 12:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:17.889 12:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:17.889 12:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:17.889 12:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:15:17.889 12:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:17.889 12:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:17.889 12:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:17.889 12:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:17.889 12:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.889 12:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.889 12:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.889 12:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.889 12:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.889 12:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.889 12:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.147 00:15:18.404 12:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.405 12:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.405 12:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.662 12:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.662 12:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.662 12:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.662 12:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.662 12:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.662 12:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:18.662 { 00:15:18.662 "cntlid": 17, 00:15:18.662 "qid": 0, 00:15:18.662 "state": "enabled", 00:15:18.662 "thread": "nvmf_tgt_poll_group_000", 00:15:18.662 "listen_address": { 00:15:18.662 "trtype": "TCP", 00:15:18.662 "adrfam": "IPv4", 00:15:18.662 "traddr": "10.0.0.2", 00:15:18.662 "trsvcid": "4420" 00:15:18.662 }, 00:15:18.662 "peer_address": { 00:15:18.662 "trtype": "TCP", 00:15:18.662 "adrfam": "IPv4", 00:15:18.662 "traddr": "10.0.0.1", 00:15:18.662 "trsvcid": "58702" 00:15:18.662 }, 00:15:18.662 "auth": { 00:15:18.662 "state": "completed", 00:15:18.662 "digest": "sha256", 00:15:18.662 "dhgroup": "ffdhe3072" 00:15:18.662 } 00:15:18.662 } 00:15:18.662 ]' 00:15:18.662 12:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:18.662 12:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.662 12:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.662 12:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:18.662 12:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:18.662 12:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.662 12:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.662 12:26:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.930 12:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:00:ZTQwN2RjODMyNjRmMGZlYjY5YzE5YTk4MTBiYzkyZDA1N2JkNzllNjlhZjY1ZGMyQY6qow==: --dhchap-ctrl-secret DHHC-1:03:N2QzY2NiM2FlMzJhOTA2ZjhjZGRmNWZkNTJkMGU2MTJkNTllZGMwOTlhZjVkOTc4ZTYyMmRlOTk3NzA4YjY2N1DiI2Y=: 00:15:19.496 12:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.497 12:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:19.497 12:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.497 12:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.497 12:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.497 12:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.497 12:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:19.497 12:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:20.062 12:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:15:20.062 12:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.062 12:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:20.062 12:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:20.062 12:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:20.062 12:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.062 12:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.062 12:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.062 12:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.062 12:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.062 12:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.062 12:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:20.321 00:15:20.321 12:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.321 12:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.321 12:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.580 12:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.580 12:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.580 12:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.580 12:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.580 12:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.580 12:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.580 { 00:15:20.580 "cntlid": 19, 00:15:20.580 "qid": 0, 00:15:20.580 "state": "enabled", 00:15:20.580 "thread": "nvmf_tgt_poll_group_000", 00:15:20.580 "listen_address": { 00:15:20.580 "trtype": "TCP", 00:15:20.580 "adrfam": "IPv4", 00:15:20.580 "traddr": "10.0.0.2", 00:15:20.580 "trsvcid": "4420" 00:15:20.580 }, 00:15:20.580 "peer_address": { 00:15:20.580 "trtype": "TCP", 00:15:20.580 "adrfam": "IPv4", 00:15:20.580 "traddr": "10.0.0.1", 00:15:20.580 "trsvcid": "58712" 00:15:20.580 }, 00:15:20.580 "auth": { 00:15:20.580 "state": "completed", 00:15:20.580 "digest": "sha256", 00:15:20.580 "dhgroup": "ffdhe3072" 00:15:20.580 } 00:15:20.580 } 00:15:20.580 ]' 00:15:20.580 12:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.580 12:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.580 12:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.580 12:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:20.580 12:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:20.580 12:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.580 12:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.580 12:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.151 12:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:01:MGQxMjZiMWYyMWZkMzFmY2NiZmM3ZDQ5NjM3MzY3ZTCmNfkx: --dhchap-ctrl-secret DHHC-1:02:ZjZhODNhMzg1NWVjZjdmMjQ2ZmNiNGEwZGFjMjZhYzNhMWYyZDdmZThjNDQyZTZmmBpdxg==: 00:15:21.714 12:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.714 12:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:21.714 12:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.714 
12:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.714 12:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.714 12:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:21.714 12:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:21.714 12:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:21.972 12:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:15:21.972 12:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:21.972 12:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:21.972 12:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:21.972 12:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:21.972 12:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.972 12:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.972 12:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.972 12:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.972 12:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.972 12:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.972 12:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.538 00:15:22.538 12:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:22.538 12:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:22.538 12:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.538 12:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.538 12:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.538 12:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.538 12:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.538 12:26:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.538 12:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:22.538 { 00:15:22.538 "cntlid": 21, 00:15:22.538 "qid": 0, 00:15:22.538 "state": "enabled", 00:15:22.538 
"thread": "nvmf_tgt_poll_group_000", 00:15:22.538 "listen_address": { 00:15:22.538 "trtype": "TCP", 00:15:22.538 "adrfam": "IPv4", 00:15:22.538 "traddr": "10.0.0.2", 00:15:22.538 "trsvcid": "4420" 00:15:22.538 }, 00:15:22.538 "peer_address": { 00:15:22.538 "trtype": "TCP", 00:15:22.538 "adrfam": "IPv4", 00:15:22.538 "traddr": "10.0.0.1", 00:15:22.538 "trsvcid": "58740" 00:15:22.538 }, 00:15:22.538 "auth": { 00:15:22.538 "state": "completed", 00:15:22.538 "digest": "sha256", 00:15:22.538 "dhgroup": "ffdhe3072" 00:15:22.538 } 00:15:22.538 } 00:15:22.538 ]' 00:15:22.538 12:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:22.796 12:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.796 12:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:22.796 12:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:22.796 12:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:22.796 12:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.796 12:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.796 12:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.133 12:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:02:ODdhMTA3NjUwY2VlNzFlYmU4OGVlNGE3M2JkNWM2ZTZiZjZhNWVjNjUxMzU2NzM2INGM+Q==: --dhchap-ctrl-secret DHHC-1:01:M2M2NjRiZTFlODhiYWFmNjlkMmM5MDIwOGZlYWY2MjLVusUc: 00:15:23.697 12:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.697 12:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:23.697 12:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.697 12:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.697 12:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.697 12:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:23.697 12:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:23.697 12:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:23.956 12:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:15:23.956 12:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:23.956 12:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:23.956 12:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:23.956 12:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:15:23.956 12:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.956 12:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key3 00:15:23.956 12:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.956 12:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.956 12:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.956 12:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:23.956 12:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:24.215 00:15:24.473 12:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:24.473 12:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:24.473 12:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.731 12:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.731 12:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.731 12:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.731 12:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.731 12:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.731 12:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:24.731 { 00:15:24.731 "cntlid": 23, 00:15:24.731 "qid": 0, 00:15:24.731 "state": "enabled", 00:15:24.731 "thread": "nvmf_tgt_poll_group_000", 00:15:24.731 "listen_address": { 00:15:24.731 "trtype": "TCP", 00:15:24.731 "adrfam": "IPv4", 00:15:24.731 "traddr": "10.0.0.2", 00:15:24.731 "trsvcid": "4420" 00:15:24.731 }, 00:15:24.731 "peer_address": { 00:15:24.731 "trtype": "TCP", 00:15:24.731 "adrfam": "IPv4", 00:15:24.731 "traddr": "10.0.0.1", 00:15:24.731 "trsvcid": "58760" 00:15:24.731 }, 00:15:24.731 "auth": { 00:15:24.731 "state": "completed", 00:15:24.731 "digest": "sha256", 00:15:24.731 "dhgroup": "ffdhe3072" 00:15:24.731 } 00:15:24.731 } 00:15:24.731 ]' 00:15:24.731 12:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:24.731 12:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.731 12:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:24.731 12:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:24.731 12:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:24.731 12:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.731 
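(The trace entries referencing target/auth.sh@92 through @96 show the driving loops: for each DH group the host options are re-set via the hostrpc wrapper seen at auth.sh@31, then connect_authenticate runs once per key index. A rough sketch of that outer structure, under the assumption that the keys/ckeys arrays are populated earlier in auth.sh and are not visible in this excerpt:)

  # Sketch of the loop structure implied by target/auth.sh@92-@96 in the trace.
  # keys/ckeys are assumed to be bash arrays filled earlier in auth.sh; hostrpc is
  # the script's wrapper for rpc.py -s /var/tmp/host.sock (see auth.sh@31 above).
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)   # progression visible in this excerpt
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          # Re-restrict the host to sha256 plus the current DH group ...
          hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          # ... then run one authenticate/verify/teardown pass for this key index.
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done
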
12:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.731 12:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.989 12:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:03:NmM4YjA1MmRkMzUyZDM5ZjM1OWNhZTBmMzk5Y2Y0MmM0MmU0MmQ2OWE5NWJlZjQ5YTE2ODFlZjdiMWY4MGNkOc8c0aU=: 00:15:25.555 12:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.555 12:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:25.555 12:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.555 12:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.555 12:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.555 12:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:25.555 12:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:25.555 12:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:25.555 12:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:26.121 12:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:15:26.121 12:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:26.121 12:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:26.121 12:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:26.121 12:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:26.121 12:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.121 12:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.121 12:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.121 12:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.121 12:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.121 12:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.121 12:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.378 00:15:26.378 12:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.378 12:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.378 12:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.637 12:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.637 12:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.637 12:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.637 12:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.637 12:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.637 12:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.637 { 00:15:26.637 "cntlid": 25, 00:15:26.637 "qid": 0, 00:15:26.637 "state": "enabled", 00:15:26.637 "thread": "nvmf_tgt_poll_group_000", 00:15:26.637 "listen_address": { 00:15:26.637 "trtype": "TCP", 00:15:26.637 "adrfam": "IPv4", 00:15:26.637 "traddr": "10.0.0.2", 00:15:26.637 "trsvcid": "4420" 00:15:26.637 }, 00:15:26.637 "peer_address": { 00:15:26.637 "trtype": "TCP", 00:15:26.637 "adrfam": "IPv4", 00:15:26.637 "traddr": "10.0.0.1", 00:15:26.637 "trsvcid": "51850" 00:15:26.637 }, 00:15:26.637 "auth": { 00:15:26.637 "state": "completed", 00:15:26.637 "digest": "sha256", 00:15:26.637 "dhgroup": "ffdhe4096" 00:15:26.637 } 00:15:26.637 } 00:15:26.637 ]' 00:15:26.637 12:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.637 12:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.637 12:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.637 12:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:26.637 12:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.637 12:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.637 12:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.637 12:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.895 12:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:00:ZTQwN2RjODMyNjRmMGZlYjY5YzE5YTk4MTBiYzkyZDA1N2JkNzllNjlhZjY1ZGMyQY6qow==: --dhchap-ctrl-secret DHHC-1:03:N2QzY2NiM2FlMzJhOTA2ZjhjZGRmNWZkNTJkMGU2MTJkNTllZGMwOTlhZjVkOTc4ZTYyMmRlOTk3NzA4YjY2N1DiI2Y=: 00:15:27.828 12:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.828 12:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:27.828 12:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.828 12:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.828 12:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.828 12:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.828 12:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:27.828 12:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:27.828 12:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:15:27.828 12:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.828 12:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:27.828 12:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:27.828 12:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:27.828 12:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.828 12:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.828 12:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.828 12:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.828 12:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.828 12:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.828 12:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.392 00:15:28.392 12:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.392 12:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.392 12:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.650 12:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.650 12:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.650 12:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.650 12:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.650 12:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:15:28.650 12:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.650 { 00:15:28.650 "cntlid": 27, 00:15:28.650 "qid": 0, 00:15:28.650 "state": "enabled", 00:15:28.650 "thread": "nvmf_tgt_poll_group_000", 00:15:28.650 "listen_address": { 00:15:28.650 "trtype": "TCP", 00:15:28.650 "adrfam": "IPv4", 00:15:28.650 "traddr": "10.0.0.2", 00:15:28.650 "trsvcid": "4420" 00:15:28.650 }, 00:15:28.650 "peer_address": { 00:15:28.650 "trtype": "TCP", 00:15:28.650 "adrfam": "IPv4", 00:15:28.650 "traddr": "10.0.0.1", 00:15:28.650 "trsvcid": "51874" 00:15:28.650 }, 00:15:28.650 "auth": { 00:15:28.650 "state": "completed", 00:15:28.650 "digest": "sha256", 00:15:28.650 "dhgroup": "ffdhe4096" 00:15:28.650 } 00:15:28.650 } 00:15:28.650 ]' 00:15:28.650 12:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.650 12:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:28.650 12:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.650 12:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:28.650 12:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.650 12:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.650 12:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.650 12:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.907 12:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:01:MGQxMjZiMWYyMWZkMzFmY2NiZmM3ZDQ5NjM3MzY3ZTCmNfkx: --dhchap-ctrl-secret DHHC-1:02:ZjZhODNhMzg1NWVjZjdmMjQ2ZmNiNGEwZGFjMjZhYzNhMWYyZDdmZThjNDQyZTZmmBpdxg==: 00:15:29.852 12:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.852 12:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:29.852 12:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.852 12:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.852 12:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.852 12:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.852 12:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:29.852 12:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:29.852 12:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:15:29.852 12:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:29.852 12:26:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:29.852 12:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:29.852 12:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:29.852 12:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.852 12:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.852 12:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.852 12:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.852 12:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.852 12:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.852 12:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.112 00:15:30.371 12:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:30.371 12:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.371 12:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:30.371 12:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.371 12:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.371 12:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.371 12:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.629 12:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.629 12:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.629 { 00:15:30.629 "cntlid": 29, 00:15:30.629 "qid": 0, 00:15:30.629 "state": "enabled", 00:15:30.629 "thread": "nvmf_tgt_poll_group_000", 00:15:30.629 "listen_address": { 00:15:30.629 "trtype": "TCP", 00:15:30.629 "adrfam": "IPv4", 00:15:30.629 "traddr": "10.0.0.2", 00:15:30.629 "trsvcid": "4420" 00:15:30.629 }, 00:15:30.629 "peer_address": { 00:15:30.629 "trtype": "TCP", 00:15:30.629 "adrfam": "IPv4", 00:15:30.629 "traddr": "10.0.0.1", 00:15:30.629 "trsvcid": "51894" 00:15:30.629 }, 00:15:30.629 "auth": { 00:15:30.629 "state": "completed", 00:15:30.629 "digest": "sha256", 00:15:30.629 "dhgroup": "ffdhe4096" 00:15:30.629 } 00:15:30.629 } 00:15:30.629 ]' 00:15:30.629 12:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.629 12:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.629 12:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.629 12:26:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:30.629 12:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:30.629 12:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.629 12:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.629 12:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.887 12:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:02:ODdhMTA3NjUwY2VlNzFlYmU4OGVlNGE3M2JkNWM2ZTZiZjZhNWVjNjUxMzU2NzM2INGM+Q==: --dhchap-ctrl-secret DHHC-1:01:M2M2NjRiZTFlODhiYWFmNjlkMmM5MDIwOGZlYWY2MjLVusUc: 00:15:31.452 12:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.711 12:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:31.711 12:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.711 12:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.711 12:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.711 12:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:31.711 12:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:31.711 12:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:31.711 12:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:15:31.711 12:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:31.711 12:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:31.711 12:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:31.711 12:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:31.711 12:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.711 12:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key3 00:15:31.711 12:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.711 12:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.969 12:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.969 12:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:31.969 12:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:32.226 00:15:32.226 12:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.226 12:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.226 12:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.484 12:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.484 12:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.484 12:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.485 12:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.485 12:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.485 12:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:32.485 { 00:15:32.485 "cntlid": 31, 00:15:32.485 "qid": 0, 00:15:32.485 "state": "enabled", 00:15:32.485 "thread": "nvmf_tgt_poll_group_000", 00:15:32.485 "listen_address": { 00:15:32.485 "trtype": "TCP", 00:15:32.485 "adrfam": "IPv4", 00:15:32.485 "traddr": "10.0.0.2", 00:15:32.485 "trsvcid": "4420" 00:15:32.485 }, 00:15:32.485 "peer_address": { 00:15:32.485 "trtype": "TCP", 00:15:32.485 "adrfam": "IPv4", 00:15:32.485 "traddr": "10.0.0.1", 00:15:32.485 "trsvcid": "51916" 00:15:32.485 }, 00:15:32.485 "auth": { 00:15:32.485 "state": "completed", 00:15:32.485 "digest": "sha256", 00:15:32.485 "dhgroup": "ffdhe4096" 00:15:32.485 } 00:15:32.485 } 00:15:32.485 ]' 00:15:32.485 12:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:32.485 12:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:32.485 12:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:32.760 12:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:32.760 12:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:32.760 12:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.760 12:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.760 12:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.018 12:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:03:NmM4YjA1MmRkMzUyZDM5ZjM1OWNhZTBmMzk5Y2Y0MmM0MmU0MmQ2OWE5NWJlZjQ5YTE2ODFlZjdiMWY4MGNkOc8c0aU=: 00:15:33.583 12:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.583 12:27:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:33.583 12:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.583 12:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.583 12:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.583 12:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.583 12:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:33.583 12:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:33.583 12:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:33.841 12:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:15:33.841 12:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:33.841 12:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:33.841 12:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:33.841 12:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:33.841 12:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.841 12:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.841 12:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.841 12:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.841 12:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.841 12:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.841 12:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.404 00:15:34.404 12:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:34.404 12:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:34.404 12:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.662 12:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.662 12:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.662 12:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:34.662 12:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.662 12:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.662 12:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:34.662 { 00:15:34.662 "cntlid": 33, 00:15:34.662 "qid": 0, 00:15:34.662 "state": "enabled", 00:15:34.662 "thread": "nvmf_tgt_poll_group_000", 00:15:34.662 "listen_address": { 00:15:34.662 "trtype": "TCP", 00:15:34.662 "adrfam": "IPv4", 00:15:34.662 "traddr": "10.0.0.2", 00:15:34.662 "trsvcid": "4420" 00:15:34.662 }, 00:15:34.662 "peer_address": { 00:15:34.662 "trtype": "TCP", 00:15:34.662 "adrfam": "IPv4", 00:15:34.662 "traddr": "10.0.0.1", 00:15:34.662 "trsvcid": "51930" 00:15:34.662 }, 00:15:34.662 "auth": { 00:15:34.662 "state": "completed", 00:15:34.662 "digest": "sha256", 00:15:34.662 "dhgroup": "ffdhe6144" 00:15:34.662 } 00:15:34.662 } 00:15:34.662 ]' 00:15:34.662 12:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:34.663 12:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:34.663 12:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:34.663 12:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:34.663 12:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:34.663 12:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.663 12:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.663 12:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.229 12:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:00:ZTQwN2RjODMyNjRmMGZlYjY5YzE5YTk4MTBiYzkyZDA1N2JkNzllNjlhZjY1ZGMyQY6qow==: --dhchap-ctrl-secret DHHC-1:03:N2QzY2NiM2FlMzJhOTA2ZjhjZGRmNWZkNTJkMGU2MTJkNTllZGMwOTlhZjVkOTc4ZTYyMmRlOTk3NzA4YjY2N1DiI2Y=: 00:15:35.795 12:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.795 12:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:35.795 12:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.795 12:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.795 12:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.795 12:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:35.795 12:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:35.795 12:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:36.054 12:27:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:15:36.054 12:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.054 12:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:36.054 12:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:36.054 12:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:36.054 12:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.054 12:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.054 12:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.054 12:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.054 12:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.054 12:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.054 12:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.620 00:15:36.620 12:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:36.620 12:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.620 12:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.876 12:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.876 12:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.876 12:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.876 12:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.876 12:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.876 12:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:36.876 { 00:15:36.876 "cntlid": 35, 00:15:36.876 "qid": 0, 00:15:36.876 "state": "enabled", 00:15:36.876 "thread": "nvmf_tgt_poll_group_000", 00:15:36.876 "listen_address": { 00:15:36.876 "trtype": "TCP", 00:15:36.876 "adrfam": "IPv4", 00:15:36.876 "traddr": "10.0.0.2", 00:15:36.876 "trsvcid": "4420" 00:15:36.876 }, 00:15:36.876 "peer_address": { 00:15:36.876 "trtype": "TCP", 00:15:36.876 "adrfam": "IPv4", 00:15:36.876 "traddr": "10.0.0.1", 00:15:36.876 "trsvcid": "59356" 00:15:36.876 }, 00:15:36.876 "auth": { 00:15:36.876 "state": "completed", 00:15:36.876 "digest": "sha256", 00:15:36.876 "dhgroup": "ffdhe6144" 00:15:36.876 } 00:15:36.876 } 00:15:36.876 ]' 00:15:36.876 12:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:36.876 
12:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:36.876 12:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:36.876 12:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:36.876 12:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:36.876 12:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.876 12:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.876 12:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.133 12:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:01:MGQxMjZiMWYyMWZkMzFmY2NiZmM3ZDQ5NjM3MzY3ZTCmNfkx: --dhchap-ctrl-secret DHHC-1:02:ZjZhODNhMzg1NWVjZjdmMjQ2ZmNiNGEwZGFjMjZhYzNhMWYyZDdmZThjNDQyZTZmmBpdxg==: 00:15:38.066 12:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.066 12:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:38.066 12:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.066 12:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.066 12:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.066 12:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.066 12:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:38.066 12:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:38.066 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:15:38.066 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.066 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:38.066 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:38.066 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:38.066 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.066 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.066 12:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.066 12:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.066 12:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
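For readability, the cycle that the trace keeps repeating above can be summarized as the command sequence below. This is a minimal reader's sketch, not part of the captured output: key names key2/ckey2 refer to DH-HMAC-CHAP keys registered earlier in the test run (outside this excerpt), the target-side calls (rpc_cmd in the trace) are shown here going through the same rpc.py against the target's default RPC socket, and the DHHC-1 secret strings are replaced by placeholder variables.

# Sketch of one connect_authenticate iteration (sha256 / ffdhe6144 / key2) as exercised by target/auth.sh.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93

# Host-side bdev_nvme options pin the DH-HMAC-CHAP digest and DH group under test.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

# Target side: allow the host on the subsystem with this iteration's key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach a controller with the matching keys, then verify the authenticated qpair
# (the trace also checks .auth.digest and .auth.dhgroup the same way).
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
"$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'        # expect "completed"
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Kernel initiator leg: the same subsystem is then connected with nvme-cli, passing the
# DHHC-1 secrets directly. $key2_secret / $ckey2_secret stand for the DHHC-1:... strings
# visible in the trace; they are placeholders here.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 \
    --dhchap-secret "$key2_secret" --dhchap-ctrl-secret "$ckey2_secret"
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The same sequence repeats for every digest/dhgroup/key combination in the trace; only those three parameters change from one pass to the next, and for key3 no controller key is configured, so the --dhchap-ctrlr-key/--dhchap-ctrl-secret arguments are simply omitted.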
00:15:38.066 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.066 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.632 00:15:38.632 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:38.632 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:38.632 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.891 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.891 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.891 12:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.891 12:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.891 12:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.891 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:38.891 { 00:15:38.891 "cntlid": 37, 00:15:38.891 "qid": 0, 00:15:38.891 "state": "enabled", 00:15:38.891 "thread": "nvmf_tgt_poll_group_000", 00:15:38.891 "listen_address": { 00:15:38.891 "trtype": "TCP", 00:15:38.891 "adrfam": "IPv4", 00:15:38.891 "traddr": "10.0.0.2", 00:15:38.891 "trsvcid": "4420" 00:15:38.891 }, 00:15:38.891 "peer_address": { 00:15:38.891 "trtype": "TCP", 00:15:38.891 "adrfam": "IPv4", 00:15:38.891 "traddr": "10.0.0.1", 00:15:38.891 "trsvcid": "59384" 00:15:38.891 }, 00:15:38.891 "auth": { 00:15:38.891 "state": "completed", 00:15:38.891 "digest": "sha256", 00:15:38.891 "dhgroup": "ffdhe6144" 00:15:38.891 } 00:15:38.891 } 00:15:38.891 ]' 00:15:38.891 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:38.891 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:38.891 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:38.891 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:38.891 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.149 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.149 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.149 12:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.149 12:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret 
DHHC-1:02:ODdhMTA3NjUwY2VlNzFlYmU4OGVlNGE3M2JkNWM2ZTZiZjZhNWVjNjUxMzU2NzM2INGM+Q==: --dhchap-ctrl-secret DHHC-1:01:M2M2NjRiZTFlODhiYWFmNjlkMmM5MDIwOGZlYWY2MjLVusUc: 00:15:40.083 12:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.083 12:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:40.083 12:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.083 12:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.083 12:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.083 12:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.083 12:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:40.083 12:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:40.083 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:15:40.083 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.083 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:40.083 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:40.083 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:40.083 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.083 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key3 00:15:40.083 12:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.083 12:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.083 12:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.083 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.083 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.648 00:15:40.648 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:40.648 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:40.648 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.905 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.905 12:27:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.905 12:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.905 12:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.905 12:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.905 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:40.905 { 00:15:40.905 "cntlid": 39, 00:15:40.905 "qid": 0, 00:15:40.905 "state": "enabled", 00:15:40.905 "thread": "nvmf_tgt_poll_group_000", 00:15:40.905 "listen_address": { 00:15:40.905 "trtype": "TCP", 00:15:40.905 "adrfam": "IPv4", 00:15:40.905 "traddr": "10.0.0.2", 00:15:40.905 "trsvcid": "4420" 00:15:40.905 }, 00:15:40.905 "peer_address": { 00:15:40.905 "trtype": "TCP", 00:15:40.905 "adrfam": "IPv4", 00:15:40.905 "traddr": "10.0.0.1", 00:15:40.905 "trsvcid": "59414" 00:15:40.905 }, 00:15:40.906 "auth": { 00:15:40.906 "state": "completed", 00:15:40.906 "digest": "sha256", 00:15:40.906 "dhgroup": "ffdhe6144" 00:15:40.906 } 00:15:40.906 } 00:15:40.906 ]' 00:15:40.906 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:40.906 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.906 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:40.906 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:40.906 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:40.906 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.906 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.906 12:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.163 12:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:03:NmM4YjA1MmRkMzUyZDM5ZjM1OWNhZTBmMzk5Y2Y0MmM0MmU0MmQ2OWE5NWJlZjQ5YTE2ODFlZjdiMWY4MGNkOc8c0aU=: 00:15:42.135 12:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.135 12:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:42.135 12:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.135 12:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.135 12:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.135 12:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.135 12:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.135 12:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:42.135 12:27:10 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:42.135 12:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:15:42.135 12:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:42.135 12:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:42.135 12:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:42.135 12:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:42.135 12:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.135 12:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.135 12:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.135 12:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.135 12:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.135 12:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.135 12:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.700 00:15:42.700 12:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:42.700 12:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.700 12:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.262 12:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.262 12:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.262 12:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.262 12:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.262 12:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.262 12:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.262 { 00:15:43.262 "cntlid": 41, 00:15:43.262 "qid": 0, 00:15:43.262 "state": "enabled", 00:15:43.263 "thread": "nvmf_tgt_poll_group_000", 00:15:43.263 "listen_address": { 00:15:43.263 "trtype": "TCP", 00:15:43.263 "adrfam": "IPv4", 00:15:43.263 "traddr": "10.0.0.2", 00:15:43.263 "trsvcid": "4420" 00:15:43.263 }, 00:15:43.263 "peer_address": { 00:15:43.263 "trtype": "TCP", 00:15:43.263 "adrfam": "IPv4", 00:15:43.263 "traddr": "10.0.0.1", 00:15:43.263 "trsvcid": "59432" 00:15:43.263 }, 00:15:43.263 "auth": { 00:15:43.263 "state": "completed", 00:15:43.263 "digest": "sha256", 
00:15:43.263 "dhgroup": "ffdhe8192" 00:15:43.263 } 00:15:43.263 } 00:15:43.263 ]' 00:15:43.263 12:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.263 12:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:43.263 12:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.263 12:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:43.263 12:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:43.263 12:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.263 12:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.263 12:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.520 12:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:00:ZTQwN2RjODMyNjRmMGZlYjY5YzE5YTk4MTBiYzkyZDA1N2JkNzllNjlhZjY1ZGMyQY6qow==: --dhchap-ctrl-secret DHHC-1:03:N2QzY2NiM2FlMzJhOTA2ZjhjZGRmNWZkNTJkMGU2MTJkNTllZGMwOTlhZjVkOTc4ZTYyMmRlOTk3NzA4YjY2N1DiI2Y=: 00:15:44.461 12:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.461 12:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:44.461 12:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.461 12:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.461 12:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.461 12:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:44.461 12:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:44.461 12:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:44.461 12:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:15:44.461 12:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:44.461 12:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:44.461 12:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:44.461 12:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:44.461 12:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.461 12:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.461 12:27:13 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.461 12:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.461 12:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.461 12:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.461 12:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.036 00:15:45.036 12:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.036 12:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.036 12:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.601 12:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.601 12:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.601 12:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.601 12:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.601 12:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.601 12:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.601 { 00:15:45.601 "cntlid": 43, 00:15:45.601 "qid": 0, 00:15:45.601 "state": "enabled", 00:15:45.601 "thread": "nvmf_tgt_poll_group_000", 00:15:45.601 "listen_address": { 00:15:45.601 "trtype": "TCP", 00:15:45.601 "adrfam": "IPv4", 00:15:45.601 "traddr": "10.0.0.2", 00:15:45.601 "trsvcid": "4420" 00:15:45.601 }, 00:15:45.601 "peer_address": { 00:15:45.601 "trtype": "TCP", 00:15:45.601 "adrfam": "IPv4", 00:15:45.601 "traddr": "10.0.0.1", 00:15:45.601 "trsvcid": "59462" 00:15:45.601 }, 00:15:45.601 "auth": { 00:15:45.601 "state": "completed", 00:15:45.601 "digest": "sha256", 00:15:45.601 "dhgroup": "ffdhe8192" 00:15:45.601 } 00:15:45.601 } 00:15:45.601 ]' 00:15:45.601 12:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.601 12:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:45.601 12:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.601 12:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:45.601 12:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.601 12:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.601 12:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.601 12:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.859 12:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:01:MGQxMjZiMWYyMWZkMzFmY2NiZmM3ZDQ5NjM3MzY3ZTCmNfkx: --dhchap-ctrl-secret DHHC-1:02:ZjZhODNhMzg1NWVjZjdmMjQ2ZmNiNGEwZGFjMjZhYzNhMWYyZDdmZThjNDQyZTZmmBpdxg==: 00:15:46.423 12:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.423 12:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:46.423 12:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.423 12:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.423 12:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.423 12:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.423 12:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:46.423 12:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:46.681 12:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:15:46.681 12:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.681 12:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:46.681 12:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:46.681 12:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:46.681 12:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.681 12:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.681 12:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.681 12:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.681 12:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.682 12:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.682 12:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.245 00:15:47.502 12:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.502 12:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.502 12:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.502 12:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.502 12:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.502 12:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.502 12:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.502 12:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.502 12:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.502 { 00:15:47.502 "cntlid": 45, 00:15:47.502 "qid": 0, 00:15:47.502 "state": "enabled", 00:15:47.502 "thread": "nvmf_tgt_poll_group_000", 00:15:47.502 "listen_address": { 00:15:47.502 "trtype": "TCP", 00:15:47.502 "adrfam": "IPv4", 00:15:47.502 "traddr": "10.0.0.2", 00:15:47.502 "trsvcid": "4420" 00:15:47.502 }, 00:15:47.502 "peer_address": { 00:15:47.502 "trtype": "TCP", 00:15:47.502 "adrfam": "IPv4", 00:15:47.502 "traddr": "10.0.0.1", 00:15:47.502 "trsvcid": "44012" 00:15:47.502 }, 00:15:47.502 "auth": { 00:15:47.502 "state": "completed", 00:15:47.502 "digest": "sha256", 00:15:47.502 "dhgroup": "ffdhe8192" 00:15:47.502 } 00:15:47.502 } 00:15:47.502 ]' 00:15:47.759 12:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.759 12:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.759 12:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.759 12:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:47.759 12:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.759 12:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.759 12:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.759 12:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.060 12:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:02:ODdhMTA3NjUwY2VlNzFlYmU4OGVlNGE3M2JkNWM2ZTZiZjZhNWVjNjUxMzU2NzM2INGM+Q==: --dhchap-ctrl-secret DHHC-1:01:M2M2NjRiZTFlODhiYWFmNjlkMmM5MDIwOGZlYWY2MjLVusUc: 00:15:48.635 12:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.635 12:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:48.635 12:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.635 12:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.635 12:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.635 12:27:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:48.635 12:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:48.635 12:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:48.893 12:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:15:48.893 12:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.893 12:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:48.893 12:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:48.893 12:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:48.893 12:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.893 12:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key3 00:15:48.893 12:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.893 12:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.893 12:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.893 12:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:48.893 12:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:49.457 00:15:49.457 12:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:49.457 12:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.457 12:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.715 12:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.715 12:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.715 12:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.715 12:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.715 12:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.715 12:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.715 { 00:15:49.715 "cntlid": 47, 00:15:49.715 "qid": 0, 00:15:49.715 "state": "enabled", 00:15:49.715 "thread": "nvmf_tgt_poll_group_000", 00:15:49.715 "listen_address": { 00:15:49.715 "trtype": "TCP", 00:15:49.715 "adrfam": "IPv4", 00:15:49.715 "traddr": "10.0.0.2", 00:15:49.715 "trsvcid": "4420" 00:15:49.715 }, 00:15:49.715 "peer_address": { 00:15:49.715 "trtype": "TCP", 
00:15:49.715 "adrfam": "IPv4", 00:15:49.715 "traddr": "10.0.0.1", 00:15:49.715 "trsvcid": "44046" 00:15:49.715 }, 00:15:49.715 "auth": { 00:15:49.715 "state": "completed", 00:15:49.715 "digest": "sha256", 00:15:49.715 "dhgroup": "ffdhe8192" 00:15:49.715 } 00:15:49.715 } 00:15:49.715 ]' 00:15:49.715 12:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.973 12:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.973 12:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.973 12:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:49.973 12:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.973 12:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.973 12:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.973 12:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.230 12:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:03:NmM4YjA1MmRkMzUyZDM5ZjM1OWNhZTBmMzk5Y2Y0MmM0MmU0MmQ2OWE5NWJlZjQ5YTE2ODFlZjdiMWY4MGNkOc8c0aU=: 00:15:51.163 12:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.163 12:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:51.163 12:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.163 12:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.163 12:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.163 12:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:51.163 12:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.163 12:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:51.163 12:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:51.163 12:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:51.421 12:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:15:51.421 12:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.421 12:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:51.421 12:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:51.421 12:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:51.421 12:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.421 
12:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.421 12:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.421 12:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.421 12:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.421 12:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.421 12:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.679 00:15:51.679 12:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.679 12:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:51.679 12:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.936 12:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.936 12:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.936 12:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.936 12:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.936 12:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.937 12:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.937 { 00:15:51.937 "cntlid": 49, 00:15:51.937 "qid": 0, 00:15:51.937 "state": "enabled", 00:15:51.937 "thread": "nvmf_tgt_poll_group_000", 00:15:51.937 "listen_address": { 00:15:51.937 "trtype": "TCP", 00:15:51.937 "adrfam": "IPv4", 00:15:51.937 "traddr": "10.0.0.2", 00:15:51.937 "trsvcid": "4420" 00:15:51.937 }, 00:15:51.937 "peer_address": { 00:15:51.937 "trtype": "TCP", 00:15:51.937 "adrfam": "IPv4", 00:15:51.937 "traddr": "10.0.0.1", 00:15:51.937 "trsvcid": "44076" 00:15:51.937 }, 00:15:51.937 "auth": { 00:15:51.937 "state": "completed", 00:15:51.937 "digest": "sha384", 00:15:51.937 "dhgroup": "null" 00:15:51.937 } 00:15:51.937 } 00:15:51.937 ]' 00:15:51.937 12:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.937 12:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.937 12:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.937 12:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:51.937 12:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.194 12:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.194 12:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:15:52.194 12:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.452 12:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:00:ZTQwN2RjODMyNjRmMGZlYjY5YzE5YTk4MTBiYzkyZDA1N2JkNzllNjlhZjY1ZGMyQY6qow==: --dhchap-ctrl-secret DHHC-1:03:N2QzY2NiM2FlMzJhOTA2ZjhjZGRmNWZkNTJkMGU2MTJkNTllZGMwOTlhZjVkOTc4ZTYyMmRlOTk3NzA4YjY2N1DiI2Y=: 00:15:53.017 12:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.017 12:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:53.017 12:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.017 12:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.017 12:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.017 12:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.017 12:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:53.017 12:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:53.274 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:15:53.274 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.274 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:53.274 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:53.274 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:53.274 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.274 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.274 12:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.274 12:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.274 12:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.274 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.274 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.532 00:15:53.532 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.532 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.532 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.789 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.789 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.789 12:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.789 12:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.789 12:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.789 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.789 { 00:15:53.789 "cntlid": 51, 00:15:53.789 "qid": 0, 00:15:53.789 "state": "enabled", 00:15:53.789 "thread": "nvmf_tgt_poll_group_000", 00:15:53.789 "listen_address": { 00:15:53.789 "trtype": "TCP", 00:15:53.789 "adrfam": "IPv4", 00:15:53.789 "traddr": "10.0.0.2", 00:15:53.789 "trsvcid": "4420" 00:15:53.789 }, 00:15:53.789 "peer_address": { 00:15:53.789 "trtype": "TCP", 00:15:53.789 "adrfam": "IPv4", 00:15:53.789 "traddr": "10.0.0.1", 00:15:53.789 "trsvcid": "44106" 00:15:53.789 }, 00:15:53.789 "auth": { 00:15:53.789 "state": "completed", 00:15:53.789 "digest": "sha384", 00:15:53.789 "dhgroup": "null" 00:15:53.789 } 00:15:53.789 } 00:15:53.789 ]' 00:15:53.789 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.098 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.098 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.098 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:54.098 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.098 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.098 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.098 12:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.355 12:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:01:MGQxMjZiMWYyMWZkMzFmY2NiZmM3ZDQ5NjM3MzY3ZTCmNfkx: --dhchap-ctrl-secret DHHC-1:02:ZjZhODNhMzg1NWVjZjdmMjQ2ZmNiNGEwZGFjMjZhYzNhMWYyZDdmZThjNDQyZTZmmBpdxg==: 00:15:54.920 12:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.920 12:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:54.920 12:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:54.920 12:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.920 12:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.920 12:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.920 12:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:54.920 12:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:55.177 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:55.177 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.177 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:55.177 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:55.177 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:55.177 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.177 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.177 12:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.178 12:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.178 12:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.178 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.178 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.435 00:15:55.435 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.435 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.435 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.693 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.693 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.693 12:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.693 12:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.693 12:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.693 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.693 { 00:15:55.693 "cntlid": 53, 00:15:55.693 "qid": 0, 00:15:55.693 "state": "enabled", 
00:15:55.693 "thread": "nvmf_tgt_poll_group_000", 00:15:55.693 "listen_address": { 00:15:55.693 "trtype": "TCP", 00:15:55.693 "adrfam": "IPv4", 00:15:55.694 "traddr": "10.0.0.2", 00:15:55.694 "trsvcid": "4420" 00:15:55.694 }, 00:15:55.694 "peer_address": { 00:15:55.694 "trtype": "TCP", 00:15:55.694 "adrfam": "IPv4", 00:15:55.694 "traddr": "10.0.0.1", 00:15:55.694 "trsvcid": "35092" 00:15:55.694 }, 00:15:55.694 "auth": { 00:15:55.694 "state": "completed", 00:15:55.694 "digest": "sha384", 00:15:55.694 "dhgroup": "null" 00:15:55.694 } 00:15:55.694 } 00:15:55.694 ]' 00:15:55.694 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.952 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.952 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.952 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:55.952 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.952 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.952 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.952 12:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.209 12:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:02:ODdhMTA3NjUwY2VlNzFlYmU4OGVlNGE3M2JkNWM2ZTZiZjZhNWVjNjUxMzU2NzM2INGM+Q==: --dhchap-ctrl-secret DHHC-1:01:M2M2NjRiZTFlODhiYWFmNjlkMmM5MDIwOGZlYWY2MjLVusUc: 00:15:56.775 12:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.775 12:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:56.775 12:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.775 12:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.775 12:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.775 12:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.775 12:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:56.775 12:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:57.033 12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:57.033 12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.033 12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:57.033 12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:57.033 12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:57.033 
12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.033 12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key3 00:15:57.033 12:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.033 12:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.033 12:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.033 12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:57.033 12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:57.291 00:15:57.549 12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.549 12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.549 12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.549 12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.549 12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.549 12:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.549 12:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.549 12:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.549 12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.549 { 00:15:57.549 "cntlid": 55, 00:15:57.549 "qid": 0, 00:15:57.549 "state": "enabled", 00:15:57.549 "thread": "nvmf_tgt_poll_group_000", 00:15:57.549 "listen_address": { 00:15:57.549 "trtype": "TCP", 00:15:57.549 "adrfam": "IPv4", 00:15:57.549 "traddr": "10.0.0.2", 00:15:57.549 "trsvcid": "4420" 00:15:57.549 }, 00:15:57.549 "peer_address": { 00:15:57.549 "trtype": "TCP", 00:15:57.549 "adrfam": "IPv4", 00:15:57.549 "traddr": "10.0.0.1", 00:15:57.549 "trsvcid": "35118" 00:15:57.549 }, 00:15:57.549 "auth": { 00:15:57.549 "state": "completed", 00:15:57.549 "digest": "sha384", 00:15:57.549 "dhgroup": "null" 00:15:57.549 } 00:15:57.549 } 00:15:57.549 ]' 00:15:57.806 12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.807 12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.807 12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.807 12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:57.807 12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.807 12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.807 12:27:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.807 12:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.064 12:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:03:NmM4YjA1MmRkMzUyZDM5ZjM1OWNhZTBmMzk5Y2Y0MmM0MmU0MmQ2OWE5NWJlZjQ5YTE2ODFlZjdiMWY4MGNkOc8c0aU=: 00:15:58.630 12:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.630 12:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:15:58.630 12:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.630 12:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.630 12:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.630 12:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:58.630 12:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.630 12:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:58.630 12:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:58.888 12:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:58.888 12:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.888 12:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:58.888 12:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:58.888 12:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:58.888 12:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.888 12:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.888 12:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.888 12:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.888 12:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.888 12:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.888 12:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.455 00:15:59.455 12:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.455 12:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.455 12:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.713 12:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.713 12:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.713 12:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.713 12:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.713 12:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.713 12:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.713 { 00:15:59.713 "cntlid": 57, 00:15:59.713 "qid": 0, 00:15:59.713 "state": "enabled", 00:15:59.713 "thread": "nvmf_tgt_poll_group_000", 00:15:59.713 "listen_address": { 00:15:59.713 "trtype": "TCP", 00:15:59.713 "adrfam": "IPv4", 00:15:59.713 "traddr": "10.0.0.2", 00:15:59.713 "trsvcid": "4420" 00:15:59.713 }, 00:15:59.713 "peer_address": { 00:15:59.713 "trtype": "TCP", 00:15:59.713 "adrfam": "IPv4", 00:15:59.713 "traddr": "10.0.0.1", 00:15:59.713 "trsvcid": "35138" 00:15:59.713 }, 00:15:59.713 "auth": { 00:15:59.713 "state": "completed", 00:15:59.713 "digest": "sha384", 00:15:59.713 "dhgroup": "ffdhe2048" 00:15:59.713 } 00:15:59.713 } 00:15:59.713 ]' 00:15:59.713 12:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.713 12:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.713 12:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.713 12:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:59.713 12:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.713 12:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.713 12:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.713 12:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.971 12:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:00:ZTQwN2RjODMyNjRmMGZlYjY5YzE5YTk4MTBiYzkyZDA1N2JkNzllNjlhZjY1ZGMyQY6qow==: --dhchap-ctrl-secret DHHC-1:03:N2QzY2NiM2FlMzJhOTA2ZjhjZGRmNWZkNTJkMGU2MTJkNTllZGMwOTlhZjVkOTc4ZTYyMmRlOTk3NzA4YjY2N1DiI2Y=: 00:16:00.535 12:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.535 12:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:00.535 12:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.535 12:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.535 12:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.535 12:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.535 12:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:00.535 12:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:00.792 12:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:16:00.792 12:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.792 12:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:00.792 12:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:00.792 12:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:00.792 12:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.792 12:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.792 12:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.792 12:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.792 12:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.792 12:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.792 12:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.050 00:16:01.050 12:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.050 12:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.050 12:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.615 12:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.615 12:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.615 12:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.615 12:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.615 12:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.615 
12:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.615 { 00:16:01.615 "cntlid": 59, 00:16:01.615 "qid": 0, 00:16:01.615 "state": "enabled", 00:16:01.615 "thread": "nvmf_tgt_poll_group_000", 00:16:01.615 "listen_address": { 00:16:01.615 "trtype": "TCP", 00:16:01.615 "adrfam": "IPv4", 00:16:01.615 "traddr": "10.0.0.2", 00:16:01.615 "trsvcid": "4420" 00:16:01.615 }, 00:16:01.615 "peer_address": { 00:16:01.615 "trtype": "TCP", 00:16:01.615 "adrfam": "IPv4", 00:16:01.615 "traddr": "10.0.0.1", 00:16:01.615 "trsvcid": "35162" 00:16:01.615 }, 00:16:01.615 "auth": { 00:16:01.615 "state": "completed", 00:16:01.615 "digest": "sha384", 00:16:01.615 "dhgroup": "ffdhe2048" 00:16:01.615 } 00:16:01.615 } 00:16:01.615 ]' 00:16:01.615 12:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.615 12:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.615 12:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.615 12:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:01.615 12:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.615 12:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.615 12:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.615 12:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.873 12:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:01:MGQxMjZiMWYyMWZkMzFmY2NiZmM3ZDQ5NjM3MzY3ZTCmNfkx: --dhchap-ctrl-secret DHHC-1:02:ZjZhODNhMzg1NWVjZjdmMjQ2ZmNiNGEwZGFjMjZhYzNhMWYyZDdmZThjNDQyZTZmmBpdxg==: 00:16:02.437 12:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.437 12:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:02.437 12:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.437 12:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.437 12:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.437 12:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.437 12:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:02.437 12:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:02.695 12:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:16:02.695 12:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.695 12:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:16:02.695 12:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:02.695 12:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:02.695 12:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.695 12:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.695 12:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.695 12:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.695 12:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.695 12:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.695 12:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.953 00:16:02.953 12:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.953 12:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.953 12:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.238 12:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.238 12:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.238 12:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.238 12:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.497 12:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.497 12:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.497 { 00:16:03.497 "cntlid": 61, 00:16:03.497 "qid": 0, 00:16:03.497 "state": "enabled", 00:16:03.497 "thread": "nvmf_tgt_poll_group_000", 00:16:03.497 "listen_address": { 00:16:03.497 "trtype": "TCP", 00:16:03.497 "adrfam": "IPv4", 00:16:03.497 "traddr": "10.0.0.2", 00:16:03.497 "trsvcid": "4420" 00:16:03.497 }, 00:16:03.497 "peer_address": { 00:16:03.497 "trtype": "TCP", 00:16:03.497 "adrfam": "IPv4", 00:16:03.497 "traddr": "10.0.0.1", 00:16:03.497 "trsvcid": "35184" 00:16:03.497 }, 00:16:03.497 "auth": { 00:16:03.497 "state": "completed", 00:16:03.497 "digest": "sha384", 00:16:03.497 "dhgroup": "ffdhe2048" 00:16:03.497 } 00:16:03.497 } 00:16:03.497 ]' 00:16:03.497 12:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.497 12:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.497 12:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.497 12:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:03.497 12:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.497 12:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.497 12:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.497 12:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.755 12:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:02:ODdhMTA3NjUwY2VlNzFlYmU4OGVlNGE3M2JkNWM2ZTZiZjZhNWVjNjUxMzU2NzM2INGM+Q==: --dhchap-ctrl-secret DHHC-1:01:M2M2NjRiZTFlODhiYWFmNjlkMmM5MDIwOGZlYWY2MjLVusUc: 00:16:04.321 12:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.321 12:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:04.321 12:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.321 12:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.321 12:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.321 12:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.321 12:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:04.321 12:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:04.580 12:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:16:04.580 12:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.580 12:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:04.580 12:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:04.580 12:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:04.580 12:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.580 12:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key3 00:16:04.580 12:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.580 12:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.580 12:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.580 12:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:04.580 12:27:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:04.837 00:16:04.837 12:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.837 12:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.837 12:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:05.094 12:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.094 12:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.094 12:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.094 12:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.094 12:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.094 12:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.094 { 00:16:05.094 "cntlid": 63, 00:16:05.094 "qid": 0, 00:16:05.094 "state": "enabled", 00:16:05.094 "thread": "nvmf_tgt_poll_group_000", 00:16:05.094 "listen_address": { 00:16:05.094 "trtype": "TCP", 00:16:05.094 "adrfam": "IPv4", 00:16:05.094 "traddr": "10.0.0.2", 00:16:05.094 "trsvcid": "4420" 00:16:05.094 }, 00:16:05.094 "peer_address": { 00:16:05.094 "trtype": "TCP", 00:16:05.094 "adrfam": "IPv4", 00:16:05.094 "traddr": "10.0.0.1", 00:16:05.094 "trsvcid": "59768" 00:16:05.094 }, 00:16:05.094 "auth": { 00:16:05.094 "state": "completed", 00:16:05.094 "digest": "sha384", 00:16:05.094 "dhgroup": "ffdhe2048" 00:16:05.094 } 00:16:05.094 } 00:16:05.094 ]' 00:16:05.094 12:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.351 12:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.351 12:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.351 12:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:05.351 12:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:05.351 12:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.351 12:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.351 12:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.608 12:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:03:NmM4YjA1MmRkMzUyZDM5ZjM1OWNhZTBmMzk5Y2Y0MmM0MmU0MmQ2OWE5NWJlZjQ5YTE2ODFlZjdiMWY4MGNkOc8c0aU=: 00:16:06.542 12:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.542 12:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:06.542 12:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.542 12:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.542 12:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.542 12:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.542 12:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.542 12:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:06.542 12:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:06.542 12:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:16:06.542 12:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.542 12:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:06.542 12:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:06.542 12:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:06.542 12:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.542 12:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.542 12:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.542 12:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.542 12:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.542 12:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.542 12:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.799 00:16:06.799 12:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.799 12:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.799 12:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.365 12:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.365 12:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.365 12:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.365 12:27:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.365 12:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.365 12:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.365 { 00:16:07.365 "cntlid": 65, 00:16:07.365 "qid": 0, 00:16:07.365 "state": "enabled", 00:16:07.365 "thread": "nvmf_tgt_poll_group_000", 00:16:07.365 "listen_address": { 00:16:07.365 "trtype": "TCP", 00:16:07.365 "adrfam": "IPv4", 00:16:07.365 "traddr": "10.0.0.2", 00:16:07.365 "trsvcid": "4420" 00:16:07.365 }, 00:16:07.365 "peer_address": { 00:16:07.365 "trtype": "TCP", 00:16:07.365 "adrfam": "IPv4", 00:16:07.365 "traddr": "10.0.0.1", 00:16:07.365 "trsvcid": "59800" 00:16:07.365 }, 00:16:07.365 "auth": { 00:16:07.365 "state": "completed", 00:16:07.365 "digest": "sha384", 00:16:07.365 "dhgroup": "ffdhe3072" 00:16:07.365 } 00:16:07.365 } 00:16:07.365 ]' 00:16:07.365 12:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.365 12:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.365 12:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.365 12:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:07.365 12:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.365 12:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.365 12:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.365 12:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.622 12:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:00:ZTQwN2RjODMyNjRmMGZlYjY5YzE5YTk4MTBiYzkyZDA1N2JkNzllNjlhZjY1ZGMyQY6qow==: --dhchap-ctrl-secret DHHC-1:03:N2QzY2NiM2FlMzJhOTA2ZjhjZGRmNWZkNTJkMGU2MTJkNTllZGMwOTlhZjVkOTc4ZTYyMmRlOTk3NzA4YjY2N1DiI2Y=: 00:16:08.556 12:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.556 12:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:08.556 12:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.556 12:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.556 12:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.556 12:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.556 12:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:08.556 12:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:08.814 12:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 ffdhe3072 1 00:16:08.814 12:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.814 12:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:08.814 12:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:08.814 12:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:08.814 12:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.814 12:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.814 12:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.814 12:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.814 12:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.814 12:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.814 12:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.072 00:16:09.330 12:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.330 12:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.330 12:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.330 12:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.330 12:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.330 12:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.330 12:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.330 12:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.330 12:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.330 { 00:16:09.330 "cntlid": 67, 00:16:09.330 "qid": 0, 00:16:09.330 "state": "enabled", 00:16:09.330 "thread": "nvmf_tgt_poll_group_000", 00:16:09.330 "listen_address": { 00:16:09.330 "trtype": "TCP", 00:16:09.330 "adrfam": "IPv4", 00:16:09.330 "traddr": "10.0.0.2", 00:16:09.330 "trsvcid": "4420" 00:16:09.330 }, 00:16:09.330 "peer_address": { 00:16:09.330 "trtype": "TCP", 00:16:09.330 "adrfam": "IPv4", 00:16:09.330 "traddr": "10.0.0.1", 00:16:09.330 "trsvcid": "59834" 00:16:09.330 }, 00:16:09.330 "auth": { 00:16:09.330 "state": "completed", 00:16:09.330 "digest": "sha384", 00:16:09.330 "dhgroup": "ffdhe3072" 00:16:09.330 } 00:16:09.330 } 00:16:09.330 ]' 00:16:09.588 12:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.588 12:27:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.588 12:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.588 12:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:09.588 12:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.588 12:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.588 12:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.588 12:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.846 12:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:01:MGQxMjZiMWYyMWZkMzFmY2NiZmM3ZDQ5NjM3MzY3ZTCmNfkx: --dhchap-ctrl-secret DHHC-1:02:ZjZhODNhMzg1NWVjZjdmMjQ2ZmNiNGEwZGFjMjZhYzNhMWYyZDdmZThjNDQyZTZmmBpdxg==: 00:16:10.777 12:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.777 12:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:10.777 12:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.777 12:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.777 12:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.778 12:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.778 12:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:10.778 12:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:10.778 12:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:16:10.778 12:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.778 12:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:10.778 12:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:10.778 12:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:10.778 12:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.778 12:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.778 12:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.778 12:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.778 12:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.778 12:27:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.778 12:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.341 00:16:11.341 12:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.341 12:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.341 12:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.598 12:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.598 12:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.598 12:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.598 12:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.598 12:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.598 12:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.598 { 00:16:11.598 "cntlid": 69, 00:16:11.598 "qid": 0, 00:16:11.598 "state": "enabled", 00:16:11.598 "thread": "nvmf_tgt_poll_group_000", 00:16:11.598 "listen_address": { 00:16:11.598 "trtype": "TCP", 00:16:11.598 "adrfam": "IPv4", 00:16:11.598 "traddr": "10.0.0.2", 00:16:11.598 "trsvcid": "4420" 00:16:11.598 }, 00:16:11.598 "peer_address": { 00:16:11.598 "trtype": "TCP", 00:16:11.598 "adrfam": "IPv4", 00:16:11.598 "traddr": "10.0.0.1", 00:16:11.598 "trsvcid": "59858" 00:16:11.598 }, 00:16:11.598 "auth": { 00:16:11.598 "state": "completed", 00:16:11.598 "digest": "sha384", 00:16:11.598 "dhgroup": "ffdhe3072" 00:16:11.598 } 00:16:11.598 } 00:16:11.598 ]' 00:16:11.598 12:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.599 12:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:11.599 12:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.599 12:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:11.599 12:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.599 12:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.599 12:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.599 12:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.162 12:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret 
DHHC-1:02:ODdhMTA3NjUwY2VlNzFlYmU4OGVlNGE3M2JkNWM2ZTZiZjZhNWVjNjUxMzU2NzM2INGM+Q==: --dhchap-ctrl-secret DHHC-1:01:M2M2NjRiZTFlODhiYWFmNjlkMmM5MDIwOGZlYWY2MjLVusUc: 00:16:12.726 12:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.726 12:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:12.726 12:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.726 12:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.726 12:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.726 12:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.726 12:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:12.726 12:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:12.983 12:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:16:12.983 12:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.983 12:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:12.983 12:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:12.983 12:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:12.983 12:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.983 12:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key3 00:16:12.983 12:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.983 12:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.983 12:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.983 12:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:12.983 12:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:13.240 00:16:13.240 12:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.240 12:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.240 12:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.498 12:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.498 12:27:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.498 12:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.498 12:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.756 12:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.756 12:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.756 { 00:16:13.756 "cntlid": 71, 00:16:13.756 "qid": 0, 00:16:13.756 "state": "enabled", 00:16:13.756 "thread": "nvmf_tgt_poll_group_000", 00:16:13.756 "listen_address": { 00:16:13.756 "trtype": "TCP", 00:16:13.757 "adrfam": "IPv4", 00:16:13.757 "traddr": "10.0.0.2", 00:16:13.757 "trsvcid": "4420" 00:16:13.757 }, 00:16:13.757 "peer_address": { 00:16:13.757 "trtype": "TCP", 00:16:13.757 "adrfam": "IPv4", 00:16:13.757 "traddr": "10.0.0.1", 00:16:13.757 "trsvcid": "59888" 00:16:13.757 }, 00:16:13.757 "auth": { 00:16:13.757 "state": "completed", 00:16:13.757 "digest": "sha384", 00:16:13.757 "dhgroup": "ffdhe3072" 00:16:13.757 } 00:16:13.757 } 00:16:13.757 ]' 00:16:13.757 12:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.757 12:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.757 12:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.757 12:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:13.757 12:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.757 12:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.757 12:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.757 12:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.016 12:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:03:NmM4YjA1MmRkMzUyZDM5ZjM1OWNhZTBmMzk5Y2Y0MmM0MmU0MmQ2OWE5NWJlZjQ5YTE2ODFlZjdiMWY4MGNkOc8c0aU=: 00:16:14.948 12:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.948 12:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:14.948 12:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.948 12:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.948 12:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.948 12:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:14.948 12:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:14.948 12:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:14.948 12:27:43 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:14.948 12:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:16:14.948 12:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.948 12:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:14.948 12:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:14.948 12:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:14.948 12:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.948 12:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.948 12:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.948 12:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.948 12:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.948 12:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.948 12:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.514 00:16:15.514 12:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.514 12:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.514 12:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.772 12:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.772 12:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.772 12:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.772 12:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.772 12:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.772 12:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.772 { 00:16:15.772 "cntlid": 73, 00:16:15.772 "qid": 0, 00:16:15.772 "state": "enabled", 00:16:15.772 "thread": "nvmf_tgt_poll_group_000", 00:16:15.772 "listen_address": { 00:16:15.772 "trtype": "TCP", 00:16:15.772 "adrfam": "IPv4", 00:16:15.772 "traddr": "10.0.0.2", 00:16:15.772 "trsvcid": "4420" 00:16:15.772 }, 00:16:15.772 "peer_address": { 00:16:15.772 "trtype": "TCP", 00:16:15.772 "adrfam": "IPv4", 00:16:15.772 "traddr": "10.0.0.1", 00:16:15.772 "trsvcid": "50742" 00:16:15.772 }, 00:16:15.772 "auth": { 00:16:15.772 "state": "completed", 00:16:15.772 "digest": "sha384", 
00:16:15.772 "dhgroup": "ffdhe4096" 00:16:15.772 } 00:16:15.772 } 00:16:15.772 ]' 00:16:15.772 12:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.772 12:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.772 12:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.772 12:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:15.772 12:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.772 12:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.772 12:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.772 12:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.030 12:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:00:ZTQwN2RjODMyNjRmMGZlYjY5YzE5YTk4MTBiYzkyZDA1N2JkNzllNjlhZjY1ZGMyQY6qow==: --dhchap-ctrl-secret DHHC-1:03:N2QzY2NiM2FlMzJhOTA2ZjhjZGRmNWZkNTJkMGU2MTJkNTllZGMwOTlhZjVkOTc4ZTYyMmRlOTk3NzA4YjY2N1DiI2Y=: 00:16:16.596 12:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.854 12:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:16.854 12:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.854 12:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.854 12:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.854 12:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.854 12:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:16.854 12:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:16.854 12:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:16:16.854 12:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.854 12:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:16.854 12:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:16.854 12:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:16.854 12:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.854 12:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.854 12:27:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.854 12:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.854 12:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.854 12:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.854 12:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.418 00:16:17.418 12:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.418 12:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.418 12:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.674 12:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.674 12:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.674 12:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.674 12:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.674 12:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.674 12:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.674 { 00:16:17.674 "cntlid": 75, 00:16:17.674 "qid": 0, 00:16:17.674 "state": "enabled", 00:16:17.674 "thread": "nvmf_tgt_poll_group_000", 00:16:17.674 "listen_address": { 00:16:17.674 "trtype": "TCP", 00:16:17.674 "adrfam": "IPv4", 00:16:17.674 "traddr": "10.0.0.2", 00:16:17.674 "trsvcid": "4420" 00:16:17.674 }, 00:16:17.674 "peer_address": { 00:16:17.674 "trtype": "TCP", 00:16:17.674 "adrfam": "IPv4", 00:16:17.674 "traddr": "10.0.0.1", 00:16:17.674 "trsvcid": "50766" 00:16:17.674 }, 00:16:17.674 "auth": { 00:16:17.674 "state": "completed", 00:16:17.674 "digest": "sha384", 00:16:17.674 "dhgroup": "ffdhe4096" 00:16:17.674 } 00:16:17.674 } 00:16:17.674 ]' 00:16:17.674 12:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.674 12:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:17.674 12:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.674 12:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:17.674 12:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.674 12:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.674 12:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.674 12:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.931 12:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:01:MGQxMjZiMWYyMWZkMzFmY2NiZmM3ZDQ5NjM3MzY3ZTCmNfkx: --dhchap-ctrl-secret DHHC-1:02:ZjZhODNhMzg1NWVjZjdmMjQ2ZmNiNGEwZGFjMjZhYzNhMWYyZDdmZThjNDQyZTZmmBpdxg==: 00:16:18.864 12:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.864 12:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:18.864 12:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.864 12:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.864 12:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.864 12:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.864 12:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:18.864 12:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:18.864 12:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:16:18.864 12:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.864 12:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:18.864 12:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:18.864 12:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:18.864 12:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.864 12:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.864 12:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.864 12:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.864 12:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.864 12:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.864 12:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.429 00:16:19.429 12:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.429 12:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.429 12:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.687 12:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.687 12:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.687 12:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.687 12:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.687 12:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.687 12:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.687 { 00:16:19.687 "cntlid": 77, 00:16:19.687 "qid": 0, 00:16:19.687 "state": "enabled", 00:16:19.687 "thread": "nvmf_tgt_poll_group_000", 00:16:19.687 "listen_address": { 00:16:19.687 "trtype": "TCP", 00:16:19.687 "adrfam": "IPv4", 00:16:19.687 "traddr": "10.0.0.2", 00:16:19.687 "trsvcid": "4420" 00:16:19.687 }, 00:16:19.687 "peer_address": { 00:16:19.687 "trtype": "TCP", 00:16:19.687 "adrfam": "IPv4", 00:16:19.687 "traddr": "10.0.0.1", 00:16:19.687 "trsvcid": "50792" 00:16:19.687 }, 00:16:19.687 "auth": { 00:16:19.687 "state": "completed", 00:16:19.687 "digest": "sha384", 00:16:19.687 "dhgroup": "ffdhe4096" 00:16:19.687 } 00:16:19.687 } 00:16:19.687 ]' 00:16:19.687 12:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.687 12:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.687 12:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.687 12:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:19.687 12:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.944 12:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.944 12:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.944 12:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.201 12:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:02:ODdhMTA3NjUwY2VlNzFlYmU4OGVlNGE3M2JkNWM2ZTZiZjZhNWVjNjUxMzU2NzM2INGM+Q==: --dhchap-ctrl-secret DHHC-1:01:M2M2NjRiZTFlODhiYWFmNjlkMmM5MDIwOGZlYWY2MjLVusUc: 00:16:20.768 12:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.769 12:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:20.769 12:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.769 12:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.769 12:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.769 12:27:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.769 12:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:20.769 12:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:21.027 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:16:21.027 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.027 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:21.027 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:21.027 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:21.027 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.027 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key3 00:16:21.027 12:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.027 12:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.027 12:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.027 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:21.027 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:21.592 00:16:21.592 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.592 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.592 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.850 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.850 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.850 12:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.850 12:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.850 12:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.850 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.850 { 00:16:21.850 "cntlid": 79, 00:16:21.850 "qid": 0, 00:16:21.850 "state": "enabled", 00:16:21.850 "thread": "nvmf_tgt_poll_group_000", 00:16:21.850 "listen_address": { 00:16:21.850 "trtype": "TCP", 00:16:21.850 "adrfam": "IPv4", 00:16:21.850 "traddr": "10.0.0.2", 00:16:21.850 "trsvcid": "4420" 00:16:21.850 }, 00:16:21.850 "peer_address": { 00:16:21.850 "trtype": "TCP", 
00:16:21.850 "adrfam": "IPv4", 00:16:21.850 "traddr": "10.0.0.1", 00:16:21.850 "trsvcid": "50812" 00:16:21.850 }, 00:16:21.850 "auth": { 00:16:21.850 "state": "completed", 00:16:21.850 "digest": "sha384", 00:16:21.850 "dhgroup": "ffdhe4096" 00:16:21.850 } 00:16:21.850 } 00:16:21.850 ]' 00:16:21.850 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.850 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.850 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.850 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:21.850 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.850 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.850 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.850 12:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.415 12:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:03:NmM4YjA1MmRkMzUyZDM5ZjM1OWNhZTBmMzk5Y2Y0MmM0MmU0MmQ2OWE5NWJlZjQ5YTE2ODFlZjdiMWY4MGNkOc8c0aU=: 00:16:22.980 12:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.980 12:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:22.980 12:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.980 12:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.980 12:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.980 12:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:22.980 12:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:22.980 12:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:22.980 12:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:23.334 12:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:16:23.334 12:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.334 12:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:23.334 12:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:23.334 12:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:23.334 12:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.334 12:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.334 12:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.334 12:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.334 12:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.334 12:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.334 12:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.905 00:16:23.905 12:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.905 12:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.905 12:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.905 12:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.905 12:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.905 12:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.905 12:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.163 12:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.163 12:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.163 { 00:16:24.163 "cntlid": 81, 00:16:24.163 "qid": 0, 00:16:24.163 "state": "enabled", 00:16:24.163 "thread": "nvmf_tgt_poll_group_000", 00:16:24.163 "listen_address": { 00:16:24.163 "trtype": "TCP", 00:16:24.163 "adrfam": "IPv4", 00:16:24.163 "traddr": "10.0.0.2", 00:16:24.163 "trsvcid": "4420" 00:16:24.163 }, 00:16:24.163 "peer_address": { 00:16:24.163 "trtype": "TCP", 00:16:24.163 "adrfam": "IPv4", 00:16:24.163 "traddr": "10.0.0.1", 00:16:24.163 "trsvcid": "50842" 00:16:24.163 }, 00:16:24.163 "auth": { 00:16:24.163 "state": "completed", 00:16:24.163 "digest": "sha384", 00:16:24.163 "dhgroup": "ffdhe6144" 00:16:24.163 } 00:16:24.163 } 00:16:24.163 ]' 00:16:24.163 12:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.163 12:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.163 12:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.163 12:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:24.163 12:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.163 12:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.163 12:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.163 12:27:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.419 12:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:00:ZTQwN2RjODMyNjRmMGZlYjY5YzE5YTk4MTBiYzkyZDA1N2JkNzllNjlhZjY1ZGMyQY6qow==: --dhchap-ctrl-secret DHHC-1:03:N2QzY2NiM2FlMzJhOTA2ZjhjZGRmNWZkNTJkMGU2MTJkNTllZGMwOTlhZjVkOTc4ZTYyMmRlOTk3NzA4YjY2N1DiI2Y=: 00:16:25.352 12:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.352 12:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:25.352 12:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.352 12:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.352 12:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.352 12:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:25.352 12:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:25.352 12:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:25.352 12:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:16:25.352 12:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.352 12:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:25.352 12:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:25.352 12:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:25.352 12:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.352 12:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.352 12:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.352 12:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.352 12:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.353 12:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.353 12:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.919 00:16:25.919 12:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.919 12:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.919 12:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.178 12:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.178 12:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.178 12:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.178 12:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.178 12:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.178 12:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.178 { 00:16:26.178 "cntlid": 83, 00:16:26.178 "qid": 0, 00:16:26.178 "state": "enabled", 00:16:26.178 "thread": "nvmf_tgt_poll_group_000", 00:16:26.178 "listen_address": { 00:16:26.178 "trtype": "TCP", 00:16:26.178 "adrfam": "IPv4", 00:16:26.178 "traddr": "10.0.0.2", 00:16:26.178 "trsvcid": "4420" 00:16:26.178 }, 00:16:26.178 "peer_address": { 00:16:26.178 "trtype": "TCP", 00:16:26.178 "adrfam": "IPv4", 00:16:26.178 "traddr": "10.0.0.1", 00:16:26.178 "trsvcid": "41316" 00:16:26.178 }, 00:16:26.178 "auth": { 00:16:26.178 "state": "completed", 00:16:26.178 "digest": "sha384", 00:16:26.178 "dhgroup": "ffdhe6144" 00:16:26.178 } 00:16:26.178 } 00:16:26.178 ]' 00:16:26.178 12:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.178 12:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.178 12:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.499 12:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:26.499 12:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.499 12:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.499 12:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.499 12:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.758 12:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:01:MGQxMjZiMWYyMWZkMzFmY2NiZmM3ZDQ5NjM3MzY3ZTCmNfkx: --dhchap-ctrl-secret DHHC-1:02:ZjZhODNhMzg1NWVjZjdmMjQ2ZmNiNGEwZGFjMjZhYzNhMWYyZDdmZThjNDQyZTZmmBpdxg==: 00:16:27.324 12:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.325 12:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:27.325 12:27:56 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.325 12:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.325 12:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.325 12:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.325 12:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:27.325 12:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:27.583 12:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:16:27.583 12:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.583 12:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:27.583 12:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:27.583 12:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:27.583 12:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.583 12:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.583 12:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.583 12:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.583 12:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.583 12:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.583 12:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.148 00:16:28.148 12:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.148 12:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.148 12:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.148 12:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.148 12:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.148 12:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.148 12:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.148 12:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.148 12:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.148 { 00:16:28.148 "cntlid": 85, 
00:16:28.148 "qid": 0, 00:16:28.148 "state": "enabled", 00:16:28.148 "thread": "nvmf_tgt_poll_group_000", 00:16:28.148 "listen_address": { 00:16:28.148 "trtype": "TCP", 00:16:28.148 "adrfam": "IPv4", 00:16:28.148 "traddr": "10.0.0.2", 00:16:28.148 "trsvcid": "4420" 00:16:28.148 }, 00:16:28.148 "peer_address": { 00:16:28.148 "trtype": "TCP", 00:16:28.148 "adrfam": "IPv4", 00:16:28.148 "traddr": "10.0.0.1", 00:16:28.148 "trsvcid": "41332" 00:16:28.148 }, 00:16:28.148 "auth": { 00:16:28.148 "state": "completed", 00:16:28.148 "digest": "sha384", 00:16:28.148 "dhgroup": "ffdhe6144" 00:16:28.148 } 00:16:28.148 } 00:16:28.148 ]' 00:16:28.148 12:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.406 12:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.406 12:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.406 12:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:28.406 12:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.406 12:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.406 12:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.406 12:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.664 12:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:02:ODdhMTA3NjUwY2VlNzFlYmU4OGVlNGE3M2JkNWM2ZTZiZjZhNWVjNjUxMzU2NzM2INGM+Q==: --dhchap-ctrl-secret DHHC-1:01:M2M2NjRiZTFlODhiYWFmNjlkMmM5MDIwOGZlYWY2MjLVusUc: 00:16:29.598 12:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.598 12:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:29.598 12:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.598 12:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.598 12:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.598 12:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.598 12:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:29.598 12:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:29.598 12:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:16:29.598 12:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.598 12:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:29.598 12:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 
00:16:29.598 12:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:29.598 12:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.598 12:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key3 00:16:29.598 12:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.598 12:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.598 12:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.598 12:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:29.598 12:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:30.164 00:16:30.164 12:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:30.164 12:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.164 12:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.423 12:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.423 12:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.423 12:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.423 12:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.423 12:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.423 12:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.423 { 00:16:30.423 "cntlid": 87, 00:16:30.423 "qid": 0, 00:16:30.423 "state": "enabled", 00:16:30.423 "thread": "nvmf_tgt_poll_group_000", 00:16:30.423 "listen_address": { 00:16:30.423 "trtype": "TCP", 00:16:30.423 "adrfam": "IPv4", 00:16:30.423 "traddr": "10.0.0.2", 00:16:30.423 "trsvcid": "4420" 00:16:30.423 }, 00:16:30.423 "peer_address": { 00:16:30.423 "trtype": "TCP", 00:16:30.423 "adrfam": "IPv4", 00:16:30.423 "traddr": "10.0.0.1", 00:16:30.423 "trsvcid": "41368" 00:16:30.423 }, 00:16:30.423 "auth": { 00:16:30.423 "state": "completed", 00:16:30.423 "digest": "sha384", 00:16:30.423 "dhgroup": "ffdhe6144" 00:16:30.423 } 00:16:30.423 } 00:16:30.423 ]' 00:16:30.423 12:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.423 12:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:30.423 12:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.423 12:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:30.423 12:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.682 12:27:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.682 12:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.682 12:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.940 12:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:03:NmM4YjA1MmRkMzUyZDM5ZjM1OWNhZTBmMzk5Y2Y0MmM0MmU0MmQ2OWE5NWJlZjQ5YTE2ODFlZjdiMWY4MGNkOc8c0aU=: 00:16:31.509 12:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.509 12:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:31.509 12:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.509 12:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.509 12:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.509 12:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:31.509 12:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.509 12:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:31.509 12:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:31.767 12:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:16:31.767 12:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.767 12:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:31.767 12:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:31.767 12:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:31.767 12:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.767 12:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.767 12:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.767 12:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.767 12:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.767 12:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.767 12:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.334 00:16:32.334 12:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.334 12:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.334 12:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.593 12:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.593 12:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.593 12:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.593 12:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.593 12:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.593 12:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.593 { 00:16:32.593 "cntlid": 89, 00:16:32.593 "qid": 0, 00:16:32.593 "state": "enabled", 00:16:32.593 "thread": "nvmf_tgt_poll_group_000", 00:16:32.593 "listen_address": { 00:16:32.593 "trtype": "TCP", 00:16:32.593 "adrfam": "IPv4", 00:16:32.593 "traddr": "10.0.0.2", 00:16:32.593 "trsvcid": "4420" 00:16:32.593 }, 00:16:32.593 "peer_address": { 00:16:32.593 "trtype": "TCP", 00:16:32.593 "adrfam": "IPv4", 00:16:32.593 "traddr": "10.0.0.1", 00:16:32.593 "trsvcid": "41396" 00:16:32.593 }, 00:16:32.593 "auth": { 00:16:32.593 "state": "completed", 00:16:32.593 "digest": "sha384", 00:16:32.593 "dhgroup": "ffdhe8192" 00:16:32.593 } 00:16:32.593 } 00:16:32.593 ]' 00:16:32.593 12:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.851 12:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:32.851 12:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.851 12:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:32.851 12:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.851 12:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.851 12:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.851 12:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.109 12:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:00:ZTQwN2RjODMyNjRmMGZlYjY5YzE5YTk4MTBiYzkyZDA1N2JkNzllNjlhZjY1ZGMyQY6qow==: --dhchap-ctrl-secret DHHC-1:03:N2QzY2NiM2FlMzJhOTA2ZjhjZGRmNWZkNTJkMGU2MTJkNTllZGMwOTlhZjVkOTc4ZTYyMmRlOTk3NzA4YjY2N1DiI2Y=: 00:16:34.043 12:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.043 
12:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:34.043 12:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.043 12:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.043 12:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.043 12:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.043 12:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:34.043 12:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:34.301 12:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:16:34.302 12:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.302 12:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:34.302 12:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:34.302 12:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:34.302 12:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.302 12:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.302 12:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.302 12:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.302 12:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.302 12:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.302 12:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.868 00:16:34.868 12:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.868 12:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.868 12:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.126 12:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.127 12:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.127 12:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.127 12:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:35.127 12:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.127 12:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.127 { 00:16:35.127 "cntlid": 91, 00:16:35.127 "qid": 0, 00:16:35.127 "state": "enabled", 00:16:35.127 "thread": "nvmf_tgt_poll_group_000", 00:16:35.127 "listen_address": { 00:16:35.127 "trtype": "TCP", 00:16:35.127 "adrfam": "IPv4", 00:16:35.127 "traddr": "10.0.0.2", 00:16:35.127 "trsvcid": "4420" 00:16:35.127 }, 00:16:35.127 "peer_address": { 00:16:35.127 "trtype": "TCP", 00:16:35.127 "adrfam": "IPv4", 00:16:35.127 "traddr": "10.0.0.1", 00:16:35.127 "trsvcid": "41428" 00:16:35.127 }, 00:16:35.127 "auth": { 00:16:35.127 "state": "completed", 00:16:35.127 "digest": "sha384", 00:16:35.127 "dhgroup": "ffdhe8192" 00:16:35.127 } 00:16:35.127 } 00:16:35.127 ]' 00:16:35.127 12:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.127 12:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:35.127 12:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.385 12:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:35.385 12:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.385 12:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.385 12:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.385 12:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.644 12:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:01:MGQxMjZiMWYyMWZkMzFmY2NiZmM3ZDQ5NjM3MzY3ZTCmNfkx: --dhchap-ctrl-secret DHHC-1:02:ZjZhODNhMzg1NWVjZjdmMjQ2ZmNiNGEwZGFjMjZhYzNhMWYyZDdmZThjNDQyZTZmmBpdxg==: 00:16:36.211 12:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.211 12:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:36.211 12:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.211 12:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.211 12:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.211 12:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.211 12:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:36.211 12:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:36.469 12:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:16:36.469 12:28:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.469 12:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:36.469 12:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:36.469 12:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:36.469 12:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.469 12:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.469 12:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.469 12:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.469 12:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.469 12:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.469 12:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.036 00:16:37.295 12:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.295 12:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.295 12:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.295 12:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.295 12:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.295 12:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.295 12:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.553 12:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.553 12:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.553 { 00:16:37.553 "cntlid": 93, 00:16:37.553 "qid": 0, 00:16:37.553 "state": "enabled", 00:16:37.553 "thread": "nvmf_tgt_poll_group_000", 00:16:37.553 "listen_address": { 00:16:37.553 "trtype": "TCP", 00:16:37.553 "adrfam": "IPv4", 00:16:37.553 "traddr": "10.0.0.2", 00:16:37.553 "trsvcid": "4420" 00:16:37.553 }, 00:16:37.553 "peer_address": { 00:16:37.553 "trtype": "TCP", 00:16:37.553 "adrfam": "IPv4", 00:16:37.553 "traddr": "10.0.0.1", 00:16:37.553 "trsvcid": "44626" 00:16:37.553 }, 00:16:37.553 "auth": { 00:16:37.553 "state": "completed", 00:16:37.553 "digest": "sha384", 00:16:37.553 "dhgroup": "ffdhe8192" 00:16:37.553 } 00:16:37.553 } 00:16:37.553 ]' 00:16:37.553 12:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.553 12:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.553 12:28:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.553 12:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:37.553 12:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.553 12:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.553 12:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.553 12:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.812 12:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:02:ODdhMTA3NjUwY2VlNzFlYmU4OGVlNGE3M2JkNWM2ZTZiZjZhNWVjNjUxMzU2NzM2INGM+Q==: --dhchap-ctrl-secret DHHC-1:01:M2M2NjRiZTFlODhiYWFmNjlkMmM5MDIwOGZlYWY2MjLVusUc: 00:16:38.747 12:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.747 12:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:38.747 12:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.747 12:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.747 12:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.747 12:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.747 12:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:38.747 12:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:38.747 12:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:16:38.747 12:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.747 12:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:38.747 12:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:38.747 12:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:38.747 12:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.747 12:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key3 00:16:38.747 12:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.747 12:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.747 12:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.747 12:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.747 12:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:39.683 00:16:39.683 12:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.683 12:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.683 12:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.683 12:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.683 12:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.683 12:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.683 12:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.683 12:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.683 12:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.683 { 00:16:39.683 "cntlid": 95, 00:16:39.683 "qid": 0, 00:16:39.683 "state": "enabled", 00:16:39.683 "thread": "nvmf_tgt_poll_group_000", 00:16:39.683 "listen_address": { 00:16:39.683 "trtype": "TCP", 00:16:39.683 "adrfam": "IPv4", 00:16:39.683 "traddr": "10.0.0.2", 00:16:39.683 "trsvcid": "4420" 00:16:39.683 }, 00:16:39.683 "peer_address": { 00:16:39.683 "trtype": "TCP", 00:16:39.683 "adrfam": "IPv4", 00:16:39.683 "traddr": "10.0.0.1", 00:16:39.683 "trsvcid": "44646" 00:16:39.683 }, 00:16:39.683 "auth": { 00:16:39.683 "state": "completed", 00:16:39.683 "digest": "sha384", 00:16:39.683 "dhgroup": "ffdhe8192" 00:16:39.683 } 00:16:39.683 } 00:16:39.683 ]' 00:16:39.683 12:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.942 12:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:39.942 12:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.942 12:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:39.942 12:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.942 12:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.942 12:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.942 12:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.239 12:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:03:NmM4YjA1MmRkMzUyZDM5ZjM1OWNhZTBmMzk5Y2Y0MmM0MmU0MmQ2OWE5NWJlZjQ5YTE2ODFlZjdiMWY4MGNkOc8c0aU=: 00:16:40.804 12:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.804 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.804 12:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:40.804 12:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.804 12:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.804 12:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.804 12:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:40.804 12:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.804 12:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.804 12:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:40.804 12:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:41.062 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:16:41.062 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.062 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:41.063 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:41.063 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:41.063 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.063 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.063 12:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.063 12:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.320 12:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.320 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.320 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.577 00:16:41.577 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.577 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.577 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.834 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.834 12:28:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.834 12:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.834 12:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.834 12:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.834 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.834 { 00:16:41.834 "cntlid": 97, 00:16:41.834 "qid": 0, 00:16:41.834 "state": "enabled", 00:16:41.834 "thread": "nvmf_tgt_poll_group_000", 00:16:41.834 "listen_address": { 00:16:41.834 "trtype": "TCP", 00:16:41.834 "adrfam": "IPv4", 00:16:41.834 "traddr": "10.0.0.2", 00:16:41.834 "trsvcid": "4420" 00:16:41.834 }, 00:16:41.834 "peer_address": { 00:16:41.834 "trtype": "TCP", 00:16:41.834 "adrfam": "IPv4", 00:16:41.834 "traddr": "10.0.0.1", 00:16:41.834 "trsvcid": "44666" 00:16:41.834 }, 00:16:41.834 "auth": { 00:16:41.834 "state": "completed", 00:16:41.834 "digest": "sha512", 00:16:41.834 "dhgroup": "null" 00:16:41.834 } 00:16:41.834 } 00:16:41.834 ]' 00:16:41.834 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.834 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.834 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.834 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:41.834 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.834 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.834 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.834 12:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.092 12:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:00:ZTQwN2RjODMyNjRmMGZlYjY5YzE5YTk4MTBiYzkyZDA1N2JkNzllNjlhZjY1ZGMyQY6qow==: --dhchap-ctrl-secret DHHC-1:03:N2QzY2NiM2FlMzJhOTA2ZjhjZGRmNWZkNTJkMGU2MTJkNTllZGMwOTlhZjVkOTc4ZTYyMmRlOTk3NzA4YjY2N1DiI2Y=: 00:16:43.026 12:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.026 12:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:43.026 12:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.026 12:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.026 12:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.026 12:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.026 12:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:43.026 12:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:43.026 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:16:43.026 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.026 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:43.026 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:43.026 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:43.026 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.026 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.026 12:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.026 12:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.027 12:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.027 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.027 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.283 00:16:43.284 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.284 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.284 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.541 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.804 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.804 12:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.804 12:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.804 12:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.804 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.804 { 00:16:43.804 "cntlid": 99, 00:16:43.804 "qid": 0, 00:16:43.804 "state": "enabled", 00:16:43.804 "thread": "nvmf_tgt_poll_group_000", 00:16:43.804 "listen_address": { 00:16:43.804 "trtype": "TCP", 00:16:43.804 "adrfam": "IPv4", 00:16:43.804 "traddr": "10.0.0.2", 00:16:43.804 "trsvcid": "4420" 00:16:43.804 }, 00:16:43.804 "peer_address": { 00:16:43.804 "trtype": "TCP", 00:16:43.804 "adrfam": "IPv4", 00:16:43.804 "traddr": "10.0.0.1", 00:16:43.804 "trsvcid": "44682" 00:16:43.804 }, 00:16:43.804 "auth": { 00:16:43.804 "state": "completed", 00:16:43.804 "digest": "sha512", 00:16:43.804 "dhgroup": "null" 00:16:43.804 } 
00:16:43.804 } 00:16:43.804 ]' 00:16:43.804 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.804 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.804 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.804 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:43.804 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.804 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.804 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.804 12:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.064 12:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:01:MGQxMjZiMWYyMWZkMzFmY2NiZmM3ZDQ5NjM3MzY3ZTCmNfkx: --dhchap-ctrl-secret DHHC-1:02:ZjZhODNhMzg1NWVjZjdmMjQ2ZmNiNGEwZGFjMjZhYzNhMWYyZDdmZThjNDQyZTZmmBpdxg==: 00:16:44.998 12:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.998 12:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:44.998 12:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.998 12:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.998 12:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.998 12:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.998 12:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:44.998 12:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:44.998 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:16:44.998 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.998 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:44.998 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:44.998 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:44.998 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.998 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.998 12:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.998 12:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:16:44.998 12:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.998 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.998 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.564 00:16:45.564 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.564 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.564 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.564 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.564 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.564 12:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.564 12:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.822 12:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.822 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.822 { 00:16:45.822 "cntlid": 101, 00:16:45.822 "qid": 0, 00:16:45.822 "state": "enabled", 00:16:45.822 "thread": "nvmf_tgt_poll_group_000", 00:16:45.822 "listen_address": { 00:16:45.822 "trtype": "TCP", 00:16:45.822 "adrfam": "IPv4", 00:16:45.822 "traddr": "10.0.0.2", 00:16:45.822 "trsvcid": "4420" 00:16:45.822 }, 00:16:45.822 "peer_address": { 00:16:45.822 "trtype": "TCP", 00:16:45.822 "adrfam": "IPv4", 00:16:45.822 "traddr": "10.0.0.1", 00:16:45.822 "trsvcid": "53642" 00:16:45.822 }, 00:16:45.822 "auth": { 00:16:45.822 "state": "completed", 00:16:45.822 "digest": "sha512", 00:16:45.822 "dhgroup": "null" 00:16:45.822 } 00:16:45.822 } 00:16:45.822 ]' 00:16:45.822 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.822 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.822 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.822 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:45.822 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.822 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.822 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.822 12:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.080 12:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 
2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:02:ODdhMTA3NjUwY2VlNzFlYmU4OGVlNGE3M2JkNWM2ZTZiZjZhNWVjNjUxMzU2NzM2INGM+Q==: --dhchap-ctrl-secret DHHC-1:01:M2M2NjRiZTFlODhiYWFmNjlkMmM5MDIwOGZlYWY2MjLVusUc: 00:16:47.014 12:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.014 12:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:47.014 12:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.014 12:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.014 12:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.014 12:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.014 12:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:47.014 12:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:47.014 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:16:47.014 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.014 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:47.014 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:47.014 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:47.014 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.014 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key3 00:16:47.014 12:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.014 12:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.014 12:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.014 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.014 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.272 00:16:47.529 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.529 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.529 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.529 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:16:47.529 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.529 12:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.529 12:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.787 12:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.787 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.787 { 00:16:47.787 "cntlid": 103, 00:16:47.787 "qid": 0, 00:16:47.787 "state": "enabled", 00:16:47.787 "thread": "nvmf_tgt_poll_group_000", 00:16:47.787 "listen_address": { 00:16:47.787 "trtype": "TCP", 00:16:47.787 "adrfam": "IPv4", 00:16:47.787 "traddr": "10.0.0.2", 00:16:47.787 "trsvcid": "4420" 00:16:47.787 }, 00:16:47.787 "peer_address": { 00:16:47.787 "trtype": "TCP", 00:16:47.787 "adrfam": "IPv4", 00:16:47.787 "traddr": "10.0.0.1", 00:16:47.787 "trsvcid": "53652" 00:16:47.787 }, 00:16:47.787 "auth": { 00:16:47.787 "state": "completed", 00:16:47.787 "digest": "sha512", 00:16:47.787 "dhgroup": "null" 00:16:47.787 } 00:16:47.787 } 00:16:47.787 ]' 00:16:47.787 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.787 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.787 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.787 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:47.787 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.787 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.787 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.787 12:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.045 12:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:03:NmM4YjA1MmRkMzUyZDM5ZjM1OWNhZTBmMzk5Y2Y0MmM0MmU0MmQ2OWE5NWJlZjQ5YTE2ODFlZjdiMWY4MGNkOc8c0aU=: 00:16:48.979 12:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.979 12:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:48.979 12:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.979 12:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.979 12:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.979 12:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.979 12:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.979 12:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:48.979 12:28:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:48.979 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:16:48.979 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.979 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:48.979 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:48.979 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:48.979 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.979 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.979 12:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.979 12:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.979 12:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.979 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.979 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.544 00:16:49.544 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.544 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.544 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.802 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.802 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.802 12:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.802 12:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.802 12:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.802 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.802 { 00:16:49.802 "cntlid": 105, 00:16:49.802 "qid": 0, 00:16:49.802 "state": "enabled", 00:16:49.802 "thread": "nvmf_tgt_poll_group_000", 00:16:49.802 "listen_address": { 00:16:49.802 "trtype": "TCP", 00:16:49.802 "adrfam": "IPv4", 00:16:49.802 "traddr": "10.0.0.2", 00:16:49.802 "trsvcid": "4420" 00:16:49.802 }, 00:16:49.802 "peer_address": { 00:16:49.802 "trtype": "TCP", 00:16:49.802 "adrfam": "IPv4", 00:16:49.802 "traddr": "10.0.0.1", 00:16:49.802 "trsvcid": "53680" 00:16:49.802 }, 00:16:49.802 "auth": { 00:16:49.802 "state": "completed", 
00:16:49.802 "digest": "sha512", 00:16:49.802 "dhgroup": "ffdhe2048" 00:16:49.802 } 00:16:49.802 } 00:16:49.802 ]' 00:16:49.802 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.802 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.802 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.802 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:49.802 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.802 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.802 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.802 12:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.060 12:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:00:ZTQwN2RjODMyNjRmMGZlYjY5YzE5YTk4MTBiYzkyZDA1N2JkNzllNjlhZjY1ZGMyQY6qow==: --dhchap-ctrl-secret DHHC-1:03:N2QzY2NiM2FlMzJhOTA2ZjhjZGRmNWZkNTJkMGU2MTJkNTllZGMwOTlhZjVkOTc4ZTYyMmRlOTk3NzA4YjY2N1DiI2Y=: 00:16:50.992 12:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.992 12:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:50.992 12:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.992 12:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.992 12:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.992 12:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.992 12:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:50.992 12:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:50.992 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:16:50.992 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.992 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:50.992 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:50.992 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:50.992 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.992 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.992 12:28:20 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.992 12:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.992 12:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.992 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.992 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.560 00:16:51.560 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.560 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.560 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.818 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.818 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.818 12:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.818 12:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.818 12:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.818 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.818 { 00:16:51.818 "cntlid": 107, 00:16:51.818 "qid": 0, 00:16:51.818 "state": "enabled", 00:16:51.818 "thread": "nvmf_tgt_poll_group_000", 00:16:51.818 "listen_address": { 00:16:51.818 "trtype": "TCP", 00:16:51.818 "adrfam": "IPv4", 00:16:51.818 "traddr": "10.0.0.2", 00:16:51.818 "trsvcid": "4420" 00:16:51.818 }, 00:16:51.818 "peer_address": { 00:16:51.818 "trtype": "TCP", 00:16:51.818 "adrfam": "IPv4", 00:16:51.818 "traddr": "10.0.0.1", 00:16:51.818 "trsvcid": "53716" 00:16:51.818 }, 00:16:51.818 "auth": { 00:16:51.818 "state": "completed", 00:16:51.818 "digest": "sha512", 00:16:51.818 "dhgroup": "ffdhe2048" 00:16:51.818 } 00:16:51.818 } 00:16:51.818 ]' 00:16:51.818 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.818 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.818 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.818 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:51.818 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.818 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.818 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.818 12:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.076 12:28:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:01:MGQxMjZiMWYyMWZkMzFmY2NiZmM3ZDQ5NjM3MzY3ZTCmNfkx: --dhchap-ctrl-secret DHHC-1:02:ZjZhODNhMzg1NWVjZjdmMjQ2ZmNiNGEwZGFjMjZhYzNhMWYyZDdmZThjNDQyZTZmmBpdxg==: 00:16:53.011 12:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.011 12:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:53.011 12:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.011 12:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.011 12:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.011 12:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.011 12:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:53.011 12:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:53.011 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:53.011 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.011 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:53.011 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:53.011 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:53.011 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.011 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.011 12:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.011 12:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.011 12:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.011 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.011 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.270 00:16:53.527 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.527 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:16:53.527 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.527 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.527 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.527 12:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.527 12:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.527 12:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.527 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.527 { 00:16:53.527 "cntlid": 109, 00:16:53.527 "qid": 0, 00:16:53.527 "state": "enabled", 00:16:53.527 "thread": "nvmf_tgt_poll_group_000", 00:16:53.527 "listen_address": { 00:16:53.527 "trtype": "TCP", 00:16:53.527 "adrfam": "IPv4", 00:16:53.527 "traddr": "10.0.0.2", 00:16:53.527 "trsvcid": "4420" 00:16:53.527 }, 00:16:53.527 "peer_address": { 00:16:53.527 "trtype": "TCP", 00:16:53.527 "adrfam": "IPv4", 00:16:53.527 "traddr": "10.0.0.1", 00:16:53.527 "trsvcid": "53734" 00:16:53.527 }, 00:16:53.527 "auth": { 00:16:53.527 "state": "completed", 00:16:53.527 "digest": "sha512", 00:16:53.527 "dhgroup": "ffdhe2048" 00:16:53.527 } 00:16:53.527 } 00:16:53.527 ]' 00:16:53.527 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.783 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.783 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.783 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:53.783 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.783 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.783 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.783 12:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.041 12:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:02:ODdhMTA3NjUwY2VlNzFlYmU4OGVlNGE3M2JkNWM2ZTZiZjZhNWVjNjUxMzU2NzM2INGM+Q==: --dhchap-ctrl-secret DHHC-1:01:M2M2NjRiZTFlODhiYWFmNjlkMmM5MDIwOGZlYWY2MjLVusUc: 00:16:54.973 12:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.973 12:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:54.973 12:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.973 12:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.973 12:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.973 12:28:23 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.973 12:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:54.973 12:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:55.231 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:55.231 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.231 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:55.231 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:55.231 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:55.231 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.231 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key3 00:16:55.231 12:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.231 12:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.231 12:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.231 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.231 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.488 00:16:55.488 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.488 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.488 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.746 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.746 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.746 12:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.746 12:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.746 12:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.746 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.746 { 00:16:55.746 "cntlid": 111, 00:16:55.746 "qid": 0, 00:16:55.746 "state": "enabled", 00:16:55.746 "thread": "nvmf_tgt_poll_group_000", 00:16:55.746 "listen_address": { 00:16:55.746 "trtype": "TCP", 00:16:55.746 "adrfam": "IPv4", 00:16:55.746 "traddr": "10.0.0.2", 00:16:55.746 "trsvcid": "4420" 00:16:55.746 }, 00:16:55.746 "peer_address": { 00:16:55.746 "trtype": "TCP", 00:16:55.746 "adrfam": 
"IPv4", 00:16:55.746 "traddr": "10.0.0.1", 00:16:55.746 "trsvcid": "53294" 00:16:55.746 }, 00:16:55.746 "auth": { 00:16:55.746 "state": "completed", 00:16:55.746 "digest": "sha512", 00:16:55.746 "dhgroup": "ffdhe2048" 00:16:55.746 } 00:16:55.746 } 00:16:55.746 ]' 00:16:55.746 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.746 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.746 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.746 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:55.746 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.004 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.004 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.004 12:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.262 12:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:03:NmM4YjA1MmRkMzUyZDM5ZjM1OWNhZTBmMzk5Y2Y0MmM0MmU0MmQ2OWE5NWJlZjQ5YTE2ODFlZjdiMWY4MGNkOc8c0aU=: 00:16:56.827 12:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.827 12:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:56.827 12:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.827 12:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.827 12:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.827 12:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.827 12:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.827 12:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:56.827 12:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:57.084 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:57.084 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.084 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:57.084 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:57.084 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:57.084 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.084 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.084 12:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.084 12:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.084 12:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.084 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.084 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.687 00:16:57.687 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.687 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.687 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.687 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.687 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.687 12:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.687 12:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.687 12:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.687 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.687 { 00:16:57.687 "cntlid": 113, 00:16:57.687 "qid": 0, 00:16:57.687 "state": "enabled", 00:16:57.687 "thread": "nvmf_tgt_poll_group_000", 00:16:57.687 "listen_address": { 00:16:57.687 "trtype": "TCP", 00:16:57.687 "adrfam": "IPv4", 00:16:57.687 "traddr": "10.0.0.2", 00:16:57.687 "trsvcid": "4420" 00:16:57.687 }, 00:16:57.687 "peer_address": { 00:16:57.687 "trtype": "TCP", 00:16:57.687 "adrfam": "IPv4", 00:16:57.687 "traddr": "10.0.0.1", 00:16:57.687 "trsvcid": "53326" 00:16:57.687 }, 00:16:57.687 "auth": { 00:16:57.687 "state": "completed", 00:16:57.687 "digest": "sha512", 00:16:57.687 "dhgroup": "ffdhe3072" 00:16:57.687 } 00:16:57.687 } 00:16:57.687 ]' 00:16:57.687 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.944 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.944 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.944 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:57.944 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.944 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.944 12:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.944 12:28:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.201 12:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:00:ZTQwN2RjODMyNjRmMGZlYjY5YzE5YTk4MTBiYzkyZDA1N2JkNzllNjlhZjY1ZGMyQY6qow==: --dhchap-ctrl-secret DHHC-1:03:N2QzY2NiM2FlMzJhOTA2ZjhjZGRmNWZkNTJkMGU2MTJkNTllZGMwOTlhZjVkOTc4ZTYyMmRlOTk3NzA4YjY2N1DiI2Y=: 00:16:58.767 12:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.767 12:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:16:58.767 12:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.767 12:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.767 12:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.767 12:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.767 12:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:58.767 12:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:59.024 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:59.024 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.024 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:59.024 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:59.024 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:59.024 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.024 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.024 12:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.025 12:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.025 12:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.025 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.025 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:59.589 00:16:59.589 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.589 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.589 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.847 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.847 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.847 12:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.847 12:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.847 12:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.847 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.847 { 00:16:59.847 "cntlid": 115, 00:16:59.847 "qid": 0, 00:16:59.847 "state": "enabled", 00:16:59.847 "thread": "nvmf_tgt_poll_group_000", 00:16:59.847 "listen_address": { 00:16:59.847 "trtype": "TCP", 00:16:59.847 "adrfam": "IPv4", 00:16:59.847 "traddr": "10.0.0.2", 00:16:59.847 "trsvcid": "4420" 00:16:59.847 }, 00:16:59.847 "peer_address": { 00:16:59.847 "trtype": "TCP", 00:16:59.847 "adrfam": "IPv4", 00:16:59.847 "traddr": "10.0.0.1", 00:16:59.847 "trsvcid": "53350" 00:16:59.847 }, 00:16:59.847 "auth": { 00:16:59.847 "state": "completed", 00:16:59.847 "digest": "sha512", 00:16:59.847 "dhgroup": "ffdhe3072" 00:16:59.847 } 00:16:59.847 } 00:16:59.847 ]' 00:16:59.847 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.847 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.847 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.847 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:59.847 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.847 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.847 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.847 12:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.105 12:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:01:MGQxMjZiMWYyMWZkMzFmY2NiZmM3ZDQ5NjM3MzY3ZTCmNfkx: --dhchap-ctrl-secret DHHC-1:02:ZjZhODNhMzg1NWVjZjdmMjQ2ZmNiNGEwZGFjMjZhYzNhMWYyZDdmZThjNDQyZTZmmBpdxg==: 00:17:01.039 12:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.039 12:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:01.039 12:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.039 
12:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.039 12:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.039 12:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.039 12:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:01.039 12:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:01.318 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:01.318 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.318 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:01.318 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:01.318 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:01.318 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.318 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.318 12:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.318 12:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.318 12:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.318 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.318 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.577 00:17:01.577 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.577 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.577 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.835 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.835 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.835 12:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.835 12:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.835 12:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.835 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.835 { 00:17:01.835 "cntlid": 117, 00:17:01.835 "qid": 0, 00:17:01.835 "state": "enabled", 00:17:01.835 
"thread": "nvmf_tgt_poll_group_000", 00:17:01.835 "listen_address": { 00:17:01.835 "trtype": "TCP", 00:17:01.835 "adrfam": "IPv4", 00:17:01.835 "traddr": "10.0.0.2", 00:17:01.835 "trsvcid": "4420" 00:17:01.835 }, 00:17:01.835 "peer_address": { 00:17:01.835 "trtype": "TCP", 00:17:01.835 "adrfam": "IPv4", 00:17:01.835 "traddr": "10.0.0.1", 00:17:01.835 "trsvcid": "53384" 00:17:01.835 }, 00:17:01.835 "auth": { 00:17:01.835 "state": "completed", 00:17:01.835 "digest": "sha512", 00:17:01.835 "dhgroup": "ffdhe3072" 00:17:01.835 } 00:17:01.835 } 00:17:01.835 ]' 00:17:01.835 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.835 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.835 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.835 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:01.835 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.093 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.093 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.093 12:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.351 12:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:02:ODdhMTA3NjUwY2VlNzFlYmU4OGVlNGE3M2JkNWM2ZTZiZjZhNWVjNjUxMzU2NzM2INGM+Q==: --dhchap-ctrl-secret DHHC-1:01:M2M2NjRiZTFlODhiYWFmNjlkMmM5MDIwOGZlYWY2MjLVusUc: 00:17:02.918 12:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.918 12:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:02.918 12:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.918 12:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.918 12:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.918 12:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.918 12:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:02.918 12:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:03.176 12:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:03.176 12:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.176 12:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:03.176 12:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:03.176 12:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:17:03.176 12:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.176 12:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key3 00:17:03.176 12:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.176 12:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.176 12:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.176 12:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.176 12:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.743 00:17:03.743 12:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.743 12:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:03.743 12:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.001 12:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.001 12:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.001 12:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.001 12:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.001 12:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.001 12:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.001 { 00:17:04.001 "cntlid": 119, 00:17:04.001 "qid": 0, 00:17:04.001 "state": "enabled", 00:17:04.001 "thread": "nvmf_tgt_poll_group_000", 00:17:04.001 "listen_address": { 00:17:04.001 "trtype": "TCP", 00:17:04.001 "adrfam": "IPv4", 00:17:04.001 "traddr": "10.0.0.2", 00:17:04.001 "trsvcid": "4420" 00:17:04.001 }, 00:17:04.001 "peer_address": { 00:17:04.001 "trtype": "TCP", 00:17:04.001 "adrfam": "IPv4", 00:17:04.001 "traddr": "10.0.0.1", 00:17:04.001 "trsvcid": "53410" 00:17:04.001 }, 00:17:04.001 "auth": { 00:17:04.001 "state": "completed", 00:17:04.001 "digest": "sha512", 00:17:04.001 "dhgroup": "ffdhe3072" 00:17:04.001 } 00:17:04.001 } 00:17:04.001 ]' 00:17:04.001 12:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.001 12:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.001 12:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.001 12:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:04.001 12:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.304 12:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.304 
12:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.304 12:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.304 12:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:03:NmM4YjA1MmRkMzUyZDM5ZjM1OWNhZTBmMzk5Y2Y0MmM0MmU0MmQ2OWE5NWJlZjQ5YTE2ODFlZjdiMWY4MGNkOc8c0aU=: 00:17:05.238 12:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.238 12:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:05.238 12:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.238 12:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.238 12:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.238 12:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.238 12:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.238 12:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:05.238 12:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:05.496 12:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:05.496 12:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.496 12:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:05.496 12:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:05.496 12:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:05.496 12:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.496 12:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.496 12:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.496 12:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.496 12:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.496 12:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.496 12:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.754 00:17:05.754 12:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.754 12:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.754 12:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.012 12:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.012 12:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.012 12:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.012 12:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.012 12:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.012 12:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.012 { 00:17:06.012 "cntlid": 121, 00:17:06.012 "qid": 0, 00:17:06.012 "state": "enabled", 00:17:06.012 "thread": "nvmf_tgt_poll_group_000", 00:17:06.012 "listen_address": { 00:17:06.012 "trtype": "TCP", 00:17:06.012 "adrfam": "IPv4", 00:17:06.012 "traddr": "10.0.0.2", 00:17:06.012 "trsvcid": "4420" 00:17:06.012 }, 00:17:06.012 "peer_address": { 00:17:06.012 "trtype": "TCP", 00:17:06.012 "adrfam": "IPv4", 00:17:06.012 "traddr": "10.0.0.1", 00:17:06.012 "trsvcid": "59438" 00:17:06.012 }, 00:17:06.012 "auth": { 00:17:06.012 "state": "completed", 00:17:06.012 "digest": "sha512", 00:17:06.012 "dhgroup": "ffdhe4096" 00:17:06.012 } 00:17:06.012 } 00:17:06.012 ]' 00:17:06.012 12:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.271 12:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.271 12:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.271 12:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:06.271 12:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.271 12:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.271 12:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.271 12:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.530 12:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:00:ZTQwN2RjODMyNjRmMGZlYjY5YzE5YTk4MTBiYzkyZDA1N2JkNzllNjlhZjY1ZGMyQY6qow==: --dhchap-ctrl-secret DHHC-1:03:N2QzY2NiM2FlMzJhOTA2ZjhjZGRmNWZkNTJkMGU2MTJkNTllZGMwOTlhZjVkOTc4ZTYyMmRlOTk3NzA4YjY2N1DiI2Y=: 00:17:07.096 12:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.096 12:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:07.097 12:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.097 12:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.097 12:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.097 12:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.097 12:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:07.097 12:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:07.663 12:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:07.663 12:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.663 12:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:07.663 12:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:07.663 12:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:07.663 12:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.663 12:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.663 12:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.663 12:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.663 12:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.663 12:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.663 12:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.921 00:17:07.921 12:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.921 12:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.921 12:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.177 12:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.177 12:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.177 12:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.177 12:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.177 12:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:17:08.177 12:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.177 { 00:17:08.177 "cntlid": 123, 00:17:08.177 "qid": 0, 00:17:08.177 "state": "enabled", 00:17:08.177 "thread": "nvmf_tgt_poll_group_000", 00:17:08.177 "listen_address": { 00:17:08.177 "trtype": "TCP", 00:17:08.177 "adrfam": "IPv4", 00:17:08.177 "traddr": "10.0.0.2", 00:17:08.177 "trsvcid": "4420" 00:17:08.177 }, 00:17:08.177 "peer_address": { 00:17:08.177 "trtype": "TCP", 00:17:08.177 "adrfam": "IPv4", 00:17:08.177 "traddr": "10.0.0.1", 00:17:08.177 "trsvcid": "59458" 00:17:08.177 }, 00:17:08.177 "auth": { 00:17:08.177 "state": "completed", 00:17:08.177 "digest": "sha512", 00:17:08.177 "dhgroup": "ffdhe4096" 00:17:08.177 } 00:17:08.177 } 00:17:08.177 ]' 00:17:08.177 12:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.177 12:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.177 12:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.177 12:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:08.177 12:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.434 12:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.434 12:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.434 12:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.691 12:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:01:MGQxMjZiMWYyMWZkMzFmY2NiZmM3ZDQ5NjM3MzY3ZTCmNfkx: --dhchap-ctrl-secret DHHC-1:02:ZjZhODNhMzg1NWVjZjdmMjQ2ZmNiNGEwZGFjMjZhYzNhMWYyZDdmZThjNDQyZTZmmBpdxg==: 00:17:09.258 12:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.258 12:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:09.258 12:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.258 12:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.258 12:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.258 12:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.258 12:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:09.258 12:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:09.517 12:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:17:09.517 12:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.517 12:28:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:09.517 12:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:09.517 12:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:09.517 12:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.517 12:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.517 12:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.517 12:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.517 12:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.517 12:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.517 12:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.085 00:17:10.085 12:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.085 12:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.085 12:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.344 12:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.344 12:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.344 12:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.344 12:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.344 12:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.344 12:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.344 { 00:17:10.344 "cntlid": 125, 00:17:10.344 "qid": 0, 00:17:10.344 "state": "enabled", 00:17:10.344 "thread": "nvmf_tgt_poll_group_000", 00:17:10.344 "listen_address": { 00:17:10.344 "trtype": "TCP", 00:17:10.344 "adrfam": "IPv4", 00:17:10.344 "traddr": "10.0.0.2", 00:17:10.344 "trsvcid": "4420" 00:17:10.344 }, 00:17:10.344 "peer_address": { 00:17:10.344 "trtype": "TCP", 00:17:10.344 "adrfam": "IPv4", 00:17:10.344 "traddr": "10.0.0.1", 00:17:10.344 "trsvcid": "59486" 00:17:10.344 }, 00:17:10.344 "auth": { 00:17:10.344 "state": "completed", 00:17:10.344 "digest": "sha512", 00:17:10.344 "dhgroup": "ffdhe4096" 00:17:10.344 } 00:17:10.344 } 00:17:10.344 ]' 00:17:10.344 12:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.344 12:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.344 12:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.344 12:28:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:10.344 12:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.344 12:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.344 12:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.344 12:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.601 12:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:02:ODdhMTA3NjUwY2VlNzFlYmU4OGVlNGE3M2JkNWM2ZTZiZjZhNWVjNjUxMzU2NzM2INGM+Q==: --dhchap-ctrl-secret DHHC-1:01:M2M2NjRiZTFlODhiYWFmNjlkMmM5MDIwOGZlYWY2MjLVusUc: 00:17:11.570 12:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.570 12:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:11.570 12:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.570 12:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.570 12:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.570 12:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.570 12:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:11.570 12:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:11.827 12:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:17:11.827 12:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.827 12:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:11.827 12:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:11.827 12:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:11.827 12:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.827 12:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key3 00:17:11.827 12:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.827 12:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.827 12:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.827 12:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.827 12:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.083 00:17:12.083 12:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.083 12:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.083 12:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.339 12:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.339 12:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.339 12:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.339 12:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.339 12:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.339 12:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.339 { 00:17:12.339 "cntlid": 127, 00:17:12.339 "qid": 0, 00:17:12.339 "state": "enabled", 00:17:12.339 "thread": "nvmf_tgt_poll_group_000", 00:17:12.339 "listen_address": { 00:17:12.339 "trtype": "TCP", 00:17:12.339 "adrfam": "IPv4", 00:17:12.339 "traddr": "10.0.0.2", 00:17:12.339 "trsvcid": "4420" 00:17:12.339 }, 00:17:12.339 "peer_address": { 00:17:12.339 "trtype": "TCP", 00:17:12.339 "adrfam": "IPv4", 00:17:12.339 "traddr": "10.0.0.1", 00:17:12.339 "trsvcid": "59500" 00:17:12.339 }, 00:17:12.339 "auth": { 00:17:12.339 "state": "completed", 00:17:12.339 "digest": "sha512", 00:17:12.339 "dhgroup": "ffdhe4096" 00:17:12.339 } 00:17:12.339 } 00:17:12.339 ]' 00:17:12.339 12:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.339 12:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.339 12:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.618 12:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:12.618 12:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.618 12:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.618 12:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.618 12:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.874 12:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:03:NmM4YjA1MmRkMzUyZDM5ZjM1OWNhZTBmMzk5Y2Y0MmM0MmU0MmQ2OWE5NWJlZjQ5YTE2ODFlZjdiMWY4MGNkOc8c0aU=: 00:17:13.436 12:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.436 12:28:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:13.436 12:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.436 12:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.437 12:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.437 12:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.437 12:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.437 12:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:13.437 12:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:13.694 12:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:17:13.694 12:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.695 12:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:13.695 12:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:13.695 12:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:13.695 12:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.695 12:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.695 12:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.695 12:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.695 12:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.952 12:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.952 12:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.209 00:17:14.209 12:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.209 12:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.209 12:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.466 12:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.466 12:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.466 12:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:14.466 12:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.466 12:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.466 12:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.466 { 00:17:14.466 "cntlid": 129, 00:17:14.466 "qid": 0, 00:17:14.466 "state": "enabled", 00:17:14.466 "thread": "nvmf_tgt_poll_group_000", 00:17:14.466 "listen_address": { 00:17:14.466 "trtype": "TCP", 00:17:14.466 "adrfam": "IPv4", 00:17:14.466 "traddr": "10.0.0.2", 00:17:14.466 "trsvcid": "4420" 00:17:14.466 }, 00:17:14.466 "peer_address": { 00:17:14.466 "trtype": "TCP", 00:17:14.466 "adrfam": "IPv4", 00:17:14.466 "traddr": "10.0.0.1", 00:17:14.466 "trsvcid": "59520" 00:17:14.466 }, 00:17:14.466 "auth": { 00:17:14.466 "state": "completed", 00:17:14.466 "digest": "sha512", 00:17:14.466 "dhgroup": "ffdhe6144" 00:17:14.466 } 00:17:14.466 } 00:17:14.466 ]' 00:17:14.466 12:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.723 12:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.723 12:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.723 12:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:14.723 12:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.723 12:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.723 12:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.723 12:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.982 12:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:00:ZTQwN2RjODMyNjRmMGZlYjY5YzE5YTk4MTBiYzkyZDA1N2JkNzllNjlhZjY1ZGMyQY6qow==: --dhchap-ctrl-secret DHHC-1:03:N2QzY2NiM2FlMzJhOTA2ZjhjZGRmNWZkNTJkMGU2MTJkNTllZGMwOTlhZjVkOTc4ZTYyMmRlOTk3NzA4YjY2N1DiI2Y=: 00:17:15.547 12:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.547 12:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:15.547 12:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.547 12:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.547 12:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.547 12:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.547 12:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:15.547 12:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:15.805 12:28:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:17:15.805 12:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.805 12:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:15.805 12:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:15.805 12:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:15.805 12:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.805 12:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.805 12:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.805 12:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.805 12:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.805 12:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.805 12:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.371 00:17:16.371 12:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.371 12:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.371 12:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.629 12:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.629 12:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.629 12:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.629 12:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.629 12:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.629 12:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.629 { 00:17:16.629 "cntlid": 131, 00:17:16.629 "qid": 0, 00:17:16.629 "state": "enabled", 00:17:16.629 "thread": "nvmf_tgt_poll_group_000", 00:17:16.629 "listen_address": { 00:17:16.629 "trtype": "TCP", 00:17:16.629 "adrfam": "IPv4", 00:17:16.629 "traddr": "10.0.0.2", 00:17:16.629 "trsvcid": "4420" 00:17:16.629 }, 00:17:16.629 "peer_address": { 00:17:16.629 "trtype": "TCP", 00:17:16.629 "adrfam": "IPv4", 00:17:16.629 "traddr": "10.0.0.1", 00:17:16.629 "trsvcid": "55132" 00:17:16.629 }, 00:17:16.629 "auth": { 00:17:16.629 "state": "completed", 00:17:16.629 "digest": "sha512", 00:17:16.629 "dhgroup": "ffdhe6144" 00:17:16.629 } 00:17:16.629 } 00:17:16.629 ]' 00:17:16.629 12:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
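The entries above make up one complete connect/verify cycle of the kind auth.sh repeats for every digest/dhgroup/key combination. Condensed into a standalone sketch for readability (addresses, NQNs and RPC sockets are copied from the log; the key names key1/ckey1 are assumed to have been registered with the target's keyring earlier in the test, and the target is assumed to answer on its default RPC socket):

    #!/usr/bin/env bash
    # Sketch of the sha512 / ffdhe6144 / key1 cycle exercised above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }     # host-side SPDK application
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93

    # Pin the initiator to a single digest and DH group so the negotiated result is deterministic.
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

    # Allow the host on the subsystem with a bidirectional key pair, then attach.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Confirm DH-HMAC-CHAP actually completed with the expected parameters.
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'    # expect sha512
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'   # expect ffdhe6144
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'     # expect completed

    # Tear down before the next combination.
    hostrpc bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The same cycle is also driven through the kernel initiator (the nvme connect / nvme disconnect entries), which passes the secrets directly as DHHC-1 strings rather than keyring names.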
00:17:16.886 12:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.886 12:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.886 12:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:16.886 12:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.886 12:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.886 12:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.886 12:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.143 12:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:01:MGQxMjZiMWYyMWZkMzFmY2NiZmM3ZDQ5NjM3MzY3ZTCmNfkx: --dhchap-ctrl-secret DHHC-1:02:ZjZhODNhMzg1NWVjZjdmMjQ2ZmNiNGEwZGFjMjZhYzNhMWYyZDdmZThjNDQyZTZmmBpdxg==: 00:17:17.751 12:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.751 12:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:17.751 12:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.751 12:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.751 12:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.751 12:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.751 12:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:17.751 12:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:18.009 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:17:18.009 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.009 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:18.009 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:18.009 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:18.009 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.009 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.009 12:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.009 12:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.009 12:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:17:18.009 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.009 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.574 00:17:18.574 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.574 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.574 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.831 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.831 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.831 12:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.831 12:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.831 12:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.831 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.831 { 00:17:18.831 "cntlid": 133, 00:17:18.831 "qid": 0, 00:17:18.831 "state": "enabled", 00:17:18.831 "thread": "nvmf_tgt_poll_group_000", 00:17:18.831 "listen_address": { 00:17:18.831 "trtype": "TCP", 00:17:18.832 "adrfam": "IPv4", 00:17:18.832 "traddr": "10.0.0.2", 00:17:18.832 "trsvcid": "4420" 00:17:18.832 }, 00:17:18.832 "peer_address": { 00:17:18.832 "trtype": "TCP", 00:17:18.832 "adrfam": "IPv4", 00:17:18.832 "traddr": "10.0.0.1", 00:17:18.832 "trsvcid": "55164" 00:17:18.832 }, 00:17:18.832 "auth": { 00:17:18.832 "state": "completed", 00:17:18.832 "digest": "sha512", 00:17:18.832 "dhgroup": "ffdhe6144" 00:17:18.832 } 00:17:18.832 } 00:17:18.832 ]' 00:17:18.832 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.832 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.832 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.089 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:19.089 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.089 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.089 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.089 12:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.347 12:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret 
DHHC-1:02:ODdhMTA3NjUwY2VlNzFlYmU4OGVlNGE3M2JkNWM2ZTZiZjZhNWVjNjUxMzU2NzM2INGM+Q==: --dhchap-ctrl-secret DHHC-1:01:M2M2NjRiZTFlODhiYWFmNjlkMmM5MDIwOGZlYWY2MjLVusUc: 00:17:19.914 12:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.914 12:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:19.914 12:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.914 12:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.914 12:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.914 12:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.914 12:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:19.914 12:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:20.172 12:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:20.172 12:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.172 12:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:20.172 12:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:20.172 12:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:20.172 12:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.172 12:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key3 00:17:20.172 12:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.172 12:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.172 12:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.172 12:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:20.172 12:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:20.738 00:17:20.738 12:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.738 12:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.738 12:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.996 12:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.996 12:28:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.996 12:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.996 12:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.996 12:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.996 12:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.996 { 00:17:20.996 "cntlid": 135, 00:17:20.996 "qid": 0, 00:17:20.996 "state": "enabled", 00:17:20.996 "thread": "nvmf_tgt_poll_group_000", 00:17:20.996 "listen_address": { 00:17:20.996 "trtype": "TCP", 00:17:20.996 "adrfam": "IPv4", 00:17:20.996 "traddr": "10.0.0.2", 00:17:20.996 "trsvcid": "4420" 00:17:20.996 }, 00:17:20.996 "peer_address": { 00:17:20.996 "trtype": "TCP", 00:17:20.996 "adrfam": "IPv4", 00:17:20.996 "traddr": "10.0.0.1", 00:17:20.996 "trsvcid": "55190" 00:17:20.996 }, 00:17:20.996 "auth": { 00:17:20.996 "state": "completed", 00:17:20.996 "digest": "sha512", 00:17:20.996 "dhgroup": "ffdhe6144" 00:17:20.996 } 00:17:20.996 } 00:17:20.996 ]' 00:17:20.996 12:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.996 12:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.996 12:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.996 12:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:20.996 12:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.253 12:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.253 12:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.254 12:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.532 12:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:03:NmM4YjA1MmRkMzUyZDM5ZjM1OWNhZTBmMzk5Y2Y0MmM0MmU0MmQ2OWE5NWJlZjQ5YTE2ODFlZjdiMWY4MGNkOc8c0aU=: 00:17:22.108 12:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.108 12:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:22.108 12:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.108 12:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.108 12:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.108 12:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.108 12:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.108 12:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:22.108 12:28:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:22.366 12:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:22.366 12:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.366 12:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:22.366 12:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:22.366 12:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:22.366 12:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.366 12:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.366 12:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.366 12:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.366 12:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.366 12:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.366 12:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.301 00:17:23.302 12:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.302 12:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.302 12:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.302 12:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.302 12:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.302 12:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.302 12:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.302 12:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.302 12:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.302 { 00:17:23.302 "cntlid": 137, 00:17:23.302 "qid": 0, 00:17:23.302 "state": "enabled", 00:17:23.302 "thread": "nvmf_tgt_poll_group_000", 00:17:23.302 "listen_address": { 00:17:23.302 "trtype": "TCP", 00:17:23.302 "adrfam": "IPv4", 00:17:23.302 "traddr": "10.0.0.2", 00:17:23.302 "trsvcid": "4420" 00:17:23.302 }, 00:17:23.302 "peer_address": { 00:17:23.302 "trtype": "TCP", 00:17:23.302 "adrfam": "IPv4", 00:17:23.302 "traddr": "10.0.0.1", 00:17:23.302 "trsvcid": "55220" 00:17:23.302 }, 00:17:23.302 "auth": { 00:17:23.302 "state": "completed", 
00:17:23.302 "digest": "sha512", 00:17:23.302 "dhgroup": "ffdhe8192" 00:17:23.302 } 00:17:23.302 } 00:17:23.302 ]' 00:17:23.302 12:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.302 12:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.560 12:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.560 12:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:23.560 12:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.560 12:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.560 12:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.560 12:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.818 12:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:00:ZTQwN2RjODMyNjRmMGZlYjY5YzE5YTk4MTBiYzkyZDA1N2JkNzllNjlhZjY1ZGMyQY6qow==: --dhchap-ctrl-secret DHHC-1:03:N2QzY2NiM2FlMzJhOTA2ZjhjZGRmNWZkNTJkMGU2MTJkNTllZGMwOTlhZjVkOTc4ZTYyMmRlOTk3NzA4YjY2N1DiI2Y=: 00:17:24.750 12:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.750 12:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:24.750 12:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.750 12:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.750 12:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.750 12:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.750 12:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:24.750 12:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:24.750 12:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:24.750 12:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.750 12:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:24.750 12:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:24.750 12:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:24.751 12:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.751 12:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.751 12:28:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.751 12:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.751 12:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.751 12:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.751 12:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.685 00:17:25.685 12:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.685 12:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.685 12:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.943 12:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.943 12:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.943 12:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.943 12:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.943 12:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.943 12:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.943 { 00:17:25.943 "cntlid": 139, 00:17:25.943 "qid": 0, 00:17:25.943 "state": "enabled", 00:17:25.943 "thread": "nvmf_tgt_poll_group_000", 00:17:25.943 "listen_address": { 00:17:25.943 "trtype": "TCP", 00:17:25.943 "adrfam": "IPv4", 00:17:25.943 "traddr": "10.0.0.2", 00:17:25.943 "trsvcid": "4420" 00:17:25.943 }, 00:17:25.943 "peer_address": { 00:17:25.943 "trtype": "TCP", 00:17:25.943 "adrfam": "IPv4", 00:17:25.943 "traddr": "10.0.0.1", 00:17:25.943 "trsvcid": "46518" 00:17:25.943 }, 00:17:25.943 "auth": { 00:17:25.943 "state": "completed", 00:17:25.943 "digest": "sha512", 00:17:25.943 "dhgroup": "ffdhe8192" 00:17:25.943 } 00:17:25.943 } 00:17:25.943 ]' 00:17:25.943 12:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.943 12:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.943 12:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.943 12:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:25.944 12:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.944 12:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.944 12:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.944 12:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.202 12:28:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:01:MGQxMjZiMWYyMWZkMzFmY2NiZmM3ZDQ5NjM3MzY3ZTCmNfkx: --dhchap-ctrl-secret DHHC-1:02:ZjZhODNhMzg1NWVjZjdmMjQ2ZmNiNGEwZGFjMjZhYzNhMWYyZDdmZThjNDQyZTZmmBpdxg==: 00:17:27.137 12:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.137 12:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:27.137 12:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.137 12:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.137 12:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.137 12:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.137 12:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:27.137 12:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:27.137 12:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:17:27.137 12:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.137 12:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:27.137 12:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:27.137 12:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:27.137 12:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.137 12:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.137 12:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.137 12:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.137 12:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.137 12:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.137 12:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.068 00:17:28.068 12:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.068 12:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:17:28.068 12:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.068 12:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.068 12:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.068 12:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.068 12:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.068 12:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.068 12:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.068 { 00:17:28.068 "cntlid": 141, 00:17:28.068 "qid": 0, 00:17:28.068 "state": "enabled", 00:17:28.068 "thread": "nvmf_tgt_poll_group_000", 00:17:28.068 "listen_address": { 00:17:28.068 "trtype": "TCP", 00:17:28.068 "adrfam": "IPv4", 00:17:28.068 "traddr": "10.0.0.2", 00:17:28.068 "trsvcid": "4420" 00:17:28.068 }, 00:17:28.068 "peer_address": { 00:17:28.068 "trtype": "TCP", 00:17:28.068 "adrfam": "IPv4", 00:17:28.068 "traddr": "10.0.0.1", 00:17:28.068 "trsvcid": "46536" 00:17:28.068 }, 00:17:28.068 "auth": { 00:17:28.068 "state": "completed", 00:17:28.068 "digest": "sha512", 00:17:28.068 "dhgroup": "ffdhe8192" 00:17:28.068 } 00:17:28.068 } 00:17:28.068 ]' 00:17:28.068 12:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.326 12:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.326 12:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.326 12:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.326 12:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.326 12:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.326 12:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.326 12:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.585 12:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:02:ODdhMTA3NjUwY2VlNzFlYmU4OGVlNGE3M2JkNWM2ZTZiZjZhNWVjNjUxMzU2NzM2INGM+Q==: --dhchap-ctrl-secret DHHC-1:01:M2M2NjRiZTFlODhiYWFmNjlkMmM5MDIwOGZlYWY2MjLVusUc: 00:17:29.521 12:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.521 12:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:29.521 12:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.521 12:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.521 12:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.521 12:28:58 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.521 12:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:29.521 12:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:29.521 12:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:17:29.521 12:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.521 12:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:29.521 12:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:29.521 12:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:29.521 12:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.521 12:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key3 00:17:29.521 12:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.521 12:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.521 12:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.521 12:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.521 12:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:30.088 00:17:30.088 12:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.088 12:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.088 12:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.346 12:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.346 12:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.346 12:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.346 12:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.346 12:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.346 12:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.346 { 00:17:30.346 "cntlid": 143, 00:17:30.346 "qid": 0, 00:17:30.346 "state": "enabled", 00:17:30.346 "thread": "nvmf_tgt_poll_group_000", 00:17:30.346 "listen_address": { 00:17:30.346 "trtype": "TCP", 00:17:30.346 "adrfam": "IPv4", 00:17:30.346 "traddr": "10.0.0.2", 00:17:30.346 "trsvcid": "4420" 00:17:30.346 }, 00:17:30.346 "peer_address": { 00:17:30.346 "trtype": "TCP", 00:17:30.346 "adrfam": 
"IPv4", 00:17:30.346 "traddr": "10.0.0.1", 00:17:30.346 "trsvcid": "46544" 00:17:30.346 }, 00:17:30.346 "auth": { 00:17:30.346 "state": "completed", 00:17:30.346 "digest": "sha512", 00:17:30.346 "dhgroup": "ffdhe8192" 00:17:30.346 } 00:17:30.346 } 00:17:30.346 ]' 00:17:30.346 12:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.603 12:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.603 12:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.603 12:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:30.603 12:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.603 12:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.603 12:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.603 12:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.861 12:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:03:NmM4YjA1MmRkMzUyZDM5ZjM1OWNhZTBmMzk5Y2Y0MmM0MmU0MmQ2OWE5NWJlZjQ5YTE2ODFlZjdiMWY4MGNkOc8c0aU=: 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 
00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.795 12:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.778 00:17:32.778 12:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.778 12:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.778 12:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.778 12:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.778 12:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.778 12:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.778 12:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.778 12:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.778 12:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.778 { 00:17:32.778 "cntlid": 145, 00:17:32.778 "qid": 0, 00:17:32.778 "state": "enabled", 00:17:32.778 "thread": "nvmf_tgt_poll_group_000", 00:17:32.778 "listen_address": { 00:17:32.778 "trtype": "TCP", 00:17:32.778 "adrfam": "IPv4", 00:17:32.778 "traddr": "10.0.0.2", 00:17:32.778 "trsvcid": "4420" 00:17:32.778 }, 00:17:32.778 "peer_address": { 00:17:32.778 "trtype": "TCP", 00:17:32.778 "adrfam": "IPv4", 00:17:32.778 "traddr": "10.0.0.1", 00:17:32.778 "trsvcid": "46574" 00:17:32.778 }, 00:17:32.778 "auth": { 00:17:32.778 "state": "completed", 00:17:32.778 "digest": "sha512", 00:17:32.778 "dhgroup": "ffdhe8192" 00:17:32.778 } 00:17:32.778 } 00:17:32.778 ]' 00:17:32.778 12:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.778 12:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.778 12:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.778 12:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:32.778 12:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:17:33.035 12:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.035 12:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.035 12:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.292 12:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:00:ZTQwN2RjODMyNjRmMGZlYjY5YzE5YTk4MTBiYzkyZDA1N2JkNzllNjlhZjY1ZGMyQY6qow==: --dhchap-ctrl-secret DHHC-1:03:N2QzY2NiM2FlMzJhOTA2ZjhjZGRmNWZkNTJkMGU2MTJkNTllZGMwOTlhZjVkOTc4ZTYyMmRlOTk3NzA4YjY2N1DiI2Y=: 00:17:33.859 12:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.859 12:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:33.859 12:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.859 12:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.859 12:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.859 12:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key1 00:17:33.859 12:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.859 12:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.859 12:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.859 12:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:33.859 12:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:33.859 12:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:33.859 12:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:33.859 12:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:33.859 12:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:33.859 12:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:33.859 12:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:33.859 12:29:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:34.424 request: 00:17:34.424 { 00:17:34.424 "name": "nvme0", 00:17:34.424 "trtype": "tcp", 00:17:34.424 "traddr": "10.0.0.2", 00:17:34.424 "adrfam": "ipv4", 00:17:34.424 "trsvcid": "4420", 00:17:34.424 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:34.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93", 00:17:34.424 "prchk_reftag": false, 00:17:34.424 "prchk_guard": false, 00:17:34.424 "hdgst": false, 00:17:34.424 "ddgst": false, 00:17:34.424 "dhchap_key": "key2", 00:17:34.424 "method": "bdev_nvme_attach_controller", 00:17:34.424 "req_id": 1 00:17:34.424 } 00:17:34.424 Got JSON-RPC error response 00:17:34.424 response: 00:17:34.424 { 00:17:34.424 "code": -5, 00:17:34.424 "message": "Input/output error" 00:17:34.424 } 00:17:34.424 12:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:34.424 12:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:34.424 12:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:34.424 12:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:34.424 12:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:34.424 12:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.424 12:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.424 12:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.424 12:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.424 12:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.424 12:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.424 12:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.424 12:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:34.424 12:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:34.424 12:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:34.424 12:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:34.424 12:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.424 12:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:34.424 12:29:03 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.424 12:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:34.424 12:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:34.988 request: 00:17:34.988 { 00:17:34.988 "name": "nvme0", 00:17:34.988 "trtype": "tcp", 00:17:34.988 "traddr": "10.0.0.2", 00:17:34.988 "adrfam": "ipv4", 00:17:34.988 "trsvcid": "4420", 00:17:34.988 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:34.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93", 00:17:34.988 "prchk_reftag": false, 00:17:34.988 "prchk_guard": false, 00:17:34.988 "hdgst": false, 00:17:34.988 "ddgst": false, 00:17:34.988 "dhchap_key": "key1", 00:17:34.988 "dhchap_ctrlr_key": "ckey2", 00:17:34.988 "method": "bdev_nvme_attach_controller", 00:17:34.988 "req_id": 1 00:17:34.988 } 00:17:34.988 Got JSON-RPC error response 00:17:34.988 response: 00:17:34.988 { 00:17:34.988 "code": -5, 00:17:34.988 "message": "Input/output error" 00:17:34.988 } 00:17:34.988 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:34.988 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:34.988 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:34.988 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:34.988 12:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:34.988 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.988 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.988 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.988 12:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key1 00:17:34.988 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.988 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.988 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.988 12:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.988 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:34.988 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.988 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:34.988 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.988 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:34.988 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.988 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.988 12:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.553 request: 00:17:35.553 { 00:17:35.553 "name": "nvme0", 00:17:35.553 "trtype": "tcp", 00:17:35.553 "traddr": "10.0.0.2", 00:17:35.553 "adrfam": "ipv4", 00:17:35.553 "trsvcid": "4420", 00:17:35.553 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:35.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93", 00:17:35.553 "prchk_reftag": false, 00:17:35.553 "prchk_guard": false, 00:17:35.553 "hdgst": false, 00:17:35.553 "ddgst": false, 00:17:35.553 "dhchap_key": "key1", 00:17:35.553 "dhchap_ctrlr_key": "ckey1", 00:17:35.553 "method": "bdev_nvme_attach_controller", 00:17:35.553 "req_id": 1 00:17:35.553 } 00:17:35.553 Got JSON-RPC error response 00:17:35.553 response: 00:17:35.553 { 00:17:35.553 "code": -5, 00:17:35.553 "message": "Input/output error" 00:17:35.553 } 00:17:35.553 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:35.553 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:35.553 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:35.553 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:35.553 12:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:35.553 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.553 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.553 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.553 12:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 81400 00:17:35.553 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 81400 ']' 00:17:35.553 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 81400 00:17:35.553 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:35.553 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:35.553 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81400 
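The three rejected attach attempts above (target/auth.sh@118, @125 and @132) are the negative half of the test: the host is registered on the subsystem with a narrower key set than it then presents (key2 when only key1 is allowed, a mismatched ckey2, and ckey1 after the host was re-added without a controller key), and each bdev_nvme_attach_controller call is required to fail, which it does with JSON-RPC error -5 (Input/output error). A minimal sketch of that expect-failure pattern, using the same RPC invocation as the log (plain `!` stands in for autotest's NOT helper):

    # The host may only authenticate with key1; presenting key2 must be rejected.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key1
    if ! "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
            -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2; then
        echo 'attach failed as expected: authentication was refused'
    fi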
00:17:35.811 killing process with pid 81400 00:17:35.811 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:35.811 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:35.811 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81400' 00:17:35.811 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 81400 00:17:35.811 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 81400 00:17:35.811 12:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:35.811 12:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:35.811 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:35.811 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.811 12:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=84433 00:17:35.811 12:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:35.811 12:29:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 84433 00:17:35.811 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 84433 ']' 00:17:35.811 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.811 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.811 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.811 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.811 12:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.199 12:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.199 12:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:37.199 12:29:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:37.199 12:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:37.199 12:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.199 12:29:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.199 12:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:37.199 12:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 84433 00:17:37.199 12:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 84433 ']' 00:17:37.199 12:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.199 12:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:37.199 12:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:37.200 12:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:37.200 12:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.200 12:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.200 12:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:37.200 12:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:37.200 12:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.200 12:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.457 12:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.457 12:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:37.458 12:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.458 12:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:37.458 12:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:37.458 12:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:37.458 12:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.458 12:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key3 00:17:37.458 12:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.458 12:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.458 12:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.458 12:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:37.458 12:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.023 00:17:38.023 12:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.023 12:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.023 12:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.281 12:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.281 12:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.281 12:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.281 12:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.281 12:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.281 12:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.281 { 00:17:38.281 "cntlid": 1, 00:17:38.281 "qid": 0, 
00:17:38.281 "state": "enabled", 00:17:38.281 "thread": "nvmf_tgt_poll_group_000", 00:17:38.281 "listen_address": { 00:17:38.281 "trtype": "TCP", 00:17:38.281 "adrfam": "IPv4", 00:17:38.281 "traddr": "10.0.0.2", 00:17:38.281 "trsvcid": "4420" 00:17:38.281 }, 00:17:38.281 "peer_address": { 00:17:38.281 "trtype": "TCP", 00:17:38.281 "adrfam": "IPv4", 00:17:38.281 "traddr": "10.0.0.1", 00:17:38.281 "trsvcid": "50768" 00:17:38.281 }, 00:17:38.281 "auth": { 00:17:38.281 "state": "completed", 00:17:38.281 "digest": "sha512", 00:17:38.281 "dhgroup": "ffdhe8192" 00:17:38.281 } 00:17:38.281 } 00:17:38.281 ]' 00:17:38.281 12:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.539 12:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.539 12:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.539 12:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.539 12:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.539 12:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.539 12:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.539 12:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.797 12:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid 2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-secret DHHC-1:03:NmM4YjA1MmRkMzUyZDM5ZjM1OWNhZTBmMzk5Y2Y0MmM0MmU0MmQ2OWE5NWJlZjQ5YTE2ODFlZjdiMWY4MGNkOc8c0aU=: 00:17:39.363 12:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.363 12:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:39.363 12:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.363 12:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.363 12:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.363 12:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --dhchap-key key3 00:17:39.363 12:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.363 12:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.363 12:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.363 12:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:39.363 12:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:39.620 12:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.620 12:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:39.620 12:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.620 12:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:39.620 12:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:39.620 12:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:39.620 12:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:39.620 12:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.620 12:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:40.186 request: 00:17:40.186 { 00:17:40.186 "name": "nvme0", 00:17:40.186 "trtype": "tcp", 00:17:40.186 "traddr": "10.0.0.2", 00:17:40.186 "adrfam": "ipv4", 00:17:40.186 "trsvcid": "4420", 00:17:40.186 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:40.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93", 00:17:40.186 "prchk_reftag": false, 00:17:40.186 "prchk_guard": false, 00:17:40.186 "hdgst": false, 00:17:40.186 "ddgst": false, 00:17:40.186 "dhchap_key": "key3", 00:17:40.186 "method": "bdev_nvme_attach_controller", 00:17:40.186 "req_id": 1 00:17:40.186 } 00:17:40.186 Got JSON-RPC error response 00:17:40.186 response: 00:17:40.186 { 00:17:40.186 "code": -5, 00:17:40.186 "message": "Input/output error" 00:17:40.186 } 00:17:40.186 12:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:40.186 12:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:40.186 12:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:40.186 12:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:40.186 12:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:40.186 12:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:40.186 12:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:40.186 12:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:40.186 12:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:40.186 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:40.187 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:40.187 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:40.187 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:40.187 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:40.187 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:40.187 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:40.187 12:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:40.752 request: 00:17:40.752 { 00:17:40.752 "name": "nvme0", 00:17:40.752 "trtype": "tcp", 00:17:40.752 "traddr": "10.0.0.2", 00:17:40.752 "adrfam": "ipv4", 00:17:40.752 "trsvcid": "4420", 00:17:40.752 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:40.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93", 00:17:40.752 "prchk_reftag": false, 00:17:40.752 "prchk_guard": false, 00:17:40.752 "hdgst": false, 00:17:40.752 "ddgst": false, 00:17:40.752 "dhchap_key": "key3", 00:17:40.752 "method": "bdev_nvme_attach_controller", 00:17:40.752 "req_id": 1 00:17:40.752 } 00:17:40.752 Got JSON-RPC error response 00:17:40.752 response: 00:17:40.752 { 00:17:40.752 "code": -5, 00:17:40.752 "message": "Input/output error" 00:17:40.752 } 00:17:40.752 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:40.752 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:40.752 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:40.752 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:40.752 12:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:40.752 12:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:40.752 12:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:40.752 12:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:40.752 12:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:40.752 12:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 
--dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:40.752 12:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:40.753 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.753 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.753 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.753 12:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:40.753 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.753 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.753 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.753 12:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:41.011 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:41.011 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:41.011 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:41.011 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:41.011 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:41.011 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:41.011 12:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:41.011 12:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:41.270 request: 00:17:41.270 { 00:17:41.270 "name": "nvme0", 00:17:41.270 "trtype": "tcp", 00:17:41.270 "traddr": "10.0.0.2", 00:17:41.270 "adrfam": "ipv4", 00:17:41.270 "trsvcid": "4420", 00:17:41.270 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:41.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93", 00:17:41.270 "prchk_reftag": false, 00:17:41.270 "prchk_guard": false, 00:17:41.270 "hdgst": false, 00:17:41.270 "ddgst": false, 00:17:41.270 "dhchap_key": "key0", 00:17:41.270 "dhchap_ctrlr_key": "key1", 00:17:41.270 "method": "bdev_nvme_attach_controller", 00:17:41.270 "req_id": 1 00:17:41.270 } 00:17:41.270 Got 
JSON-RPC error response 00:17:41.270 response: 00:17:41.270 { 00:17:41.270 "code": -5, 00:17:41.270 "message": "Input/output error" 00:17:41.270 } 00:17:41.270 12:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:41.270 12:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:41.270 12:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:41.270 12:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:41.270 12:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:41.270 12:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:41.527 00:17:41.527 12:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:17:41.527 12:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:41.527 12:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.785 12:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.785 12:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.785 12:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.043 12:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:42.043 12:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:42.043 12:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 81432 00:17:42.043 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 81432 ']' 00:17:42.043 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 81432 00:17:42.043 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:42.043 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:42.043 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81432 00:17:42.043 killing process with pid 81432 00:17:42.043 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:42.043 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:42.043 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81432' 00:17:42.043 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 81432 00:17:42.043 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 81432 00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 
00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:42.608 rmmod nvme_tcp 00:17:42.608 rmmod nvme_fabrics 00:17:42.608 rmmod nvme_keyring 00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 84433 ']' 00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 84433 00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 84433 ']' 00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 84433 00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84433 00:17:42.608 killing process with pid 84433 00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84433' 00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 84433 00:17:42.608 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 84433 00:17:42.865 12:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:42.865 12:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:42.865 12:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:42.865 12:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.865 12:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:42.865 12:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.865 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.865 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.865 12:29:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:42.865 12:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.yRU /tmp/spdk.key-sha256.KKb /tmp/spdk.key-sha384.N7L /tmp/spdk.key-sha512.Hs4 /tmp/spdk.key-sha512.1zH /tmp/spdk.key-sha384.z4O /tmp/spdk.key-sha256.vvf '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:17:42.865 00:17:42.865 real 2m49.129s 00:17:42.865 user 6m44.697s 00:17:42.865 sys 0m26.578s 00:17:42.865 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:42.865 
************************************ 00:17:42.865 END TEST nvmf_auth_target 00:17:42.865 ************************************ 00:17:42.865 12:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.865 12:29:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:42.865 12:29:11 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:42.865 12:29:11 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:42.865 12:29:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:42.865 12:29:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:42.865 12:29:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:42.865 ************************************ 00:17:42.865 START TEST nvmf_bdevio_no_huge 00:17:42.865 ************************************ 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:42.865 * Looking for test storage... 00:17:42.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh 
]] 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:42.865 12:29:11 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:42.865 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:43.123 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:43.123 Cannot find device "nvmf_tgt_br" 00:17:43.123 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:17:43.123 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:43.123 Cannot find device "nvmf_tgt_br2" 
00:17:43.123 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:17:43.123 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:43.123 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:43.123 Cannot find device "nvmf_tgt_br" 00:17:43.123 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:17:43.123 12:29:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:43.123 Cannot find device "nvmf_tgt_br2" 00:17:43.123 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:17:43.123 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:43.123 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:43.123 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:43.123 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:43.123 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:17:43.123 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:43.123 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:43.123 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:17:43.123 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:43.123 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:43.123 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:43.123 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:43.123 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:43.123 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:43.123 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:43.123 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:43.123 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:43.123 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:43.123 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:43.123 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:43.123 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:43.124 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:43.124 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:43.124 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:43.124 12:29:12 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:43.124 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:43.124 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:43.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:17:43.381 00:17:43.381 --- 10.0.0.2 ping statistics --- 00:17:43.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.381 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:43.381 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:43.381 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:17:43.381 00:17:43.381 --- 10.0.0.3 ping statistics --- 00:17:43.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.381 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:43.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:43.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:17:43.381 00:17:43.381 --- 10.0.0.1 ping statistics --- 00:17:43.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.381 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=84756 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 84756 00:17:43.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 84756 ']' 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.381 12:29:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:43.381 [2024-07-12 12:29:12.338446] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:43.381 [2024-07-12 12:29:12.338840] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:43.639 [2024-07-12 12:29:12.484562] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:43.639 [2024-07-12 12:29:12.580533] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:43.639 [2024-07-12 12:29:12.580974] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.639 [2024-07-12 12:29:12.581437] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.639 [2024-07-12 12:29:12.581461] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.639 [2024-07-12 12:29:12.581469] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.639 [2024-07-12 12:29:12.581600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:43.639 [2024-07-12 12:29:12.581831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:43.639 [2024-07-12 12:29:12.582027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:43.640 [2024-07-12 12:29:12.582037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:43.640 [2024-07-12 12:29:12.586750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:44.572 [2024-07-12 12:29:13.328211] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:44.572 Malloc0 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set 
+x 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:44.572 [2024-07-12 12:29:13.368476] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:44.572 { 00:17:44.572 "params": { 00:17:44.572 "name": "Nvme$subsystem", 00:17:44.572 "trtype": "$TEST_TRANSPORT", 00:17:44.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:44.572 "adrfam": "ipv4", 00:17:44.572 "trsvcid": "$NVMF_PORT", 00:17:44.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:44.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:44.572 "hdgst": ${hdgst:-false}, 00:17:44.572 "ddgst": ${ddgst:-false} 00:17:44.572 }, 00:17:44.572 "method": "bdev_nvme_attach_controller" 00:17:44.572 } 00:17:44.572 EOF 00:17:44.572 )") 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:44.572 12:29:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:44.572 "params": { 00:17:44.572 "name": "Nvme1", 00:17:44.572 "trtype": "tcp", 00:17:44.572 "traddr": "10.0.0.2", 00:17:44.572 "adrfam": "ipv4", 00:17:44.572 "trsvcid": "4420", 00:17:44.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:44.572 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:44.572 "hdgst": false, 00:17:44.572 "ddgst": false 00:17:44.572 }, 00:17:44.572 "method": "bdev_nvme_attach_controller" 00:17:44.572 }' 00:17:44.572 [2024-07-12 12:29:13.425584] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:17:44.572 [2024-07-12 12:29:13.425696] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid84792 ] 00:17:44.572 [2024-07-12 12:29:13.566301] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:44.829 [2024-07-12 12:29:13.682707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.829 [2024-07-12 12:29:13.682832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.829 [2024-07-12 12:29:13.682835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.829 [2024-07-12 12:29:13.697456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:44.829 I/O targets: 00:17:44.829 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:44.830 00:17:44.830 00:17:44.830 CUnit - A unit testing framework for C - Version 2.1-3 00:17:44.830 http://cunit.sourceforge.net/ 00:17:44.830 00:17:44.830 00:17:44.830 Suite: bdevio tests on: Nvme1n1 00:17:44.830 Test: blockdev write read block ...passed 00:17:44.830 Test: blockdev write zeroes read block ...passed 00:17:44.830 Test: blockdev write zeroes read no split ...passed 00:17:44.830 Test: blockdev write zeroes read split ...passed 00:17:44.830 Test: blockdev write zeroes read split partial ...passed 00:17:44.830 Test: blockdev reset ...[2024-07-12 12:29:13.899342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:44.830 [2024-07-12 12:29:13.899682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167fa80 (9): Bad file descriptor 00:17:44.830 [2024-07-12 12:29:13.910505] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:44.830 passed 00:17:45.087 Test: blockdev write read 8 blocks ...passed 00:17:45.087 Test: blockdev write read size > 128k ...passed 00:17:45.087 Test: blockdev write read invalid size ...passed 00:17:45.087 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:45.087 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:45.087 Test: blockdev write read max offset ...passed 00:17:45.087 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:45.087 Test: blockdev writev readv 8 blocks ...passed 00:17:45.087 Test: blockdev writev readv 30 x 1block ...passed 00:17:45.087 Test: blockdev writev readv block ...passed 00:17:45.087 Test: blockdev writev readv size > 128k ...passed 00:17:45.087 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:45.087 Test: blockdev comparev and writev ...[2024-07-12 12:29:13.919965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:45.087 [2024-07-12 12:29:13.920203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.087 [2024-07-12 12:29:13.920342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:45.087 [2024-07-12 12:29:13.920433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:45.087 [2024-07-12 12:29:13.920846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:45.087 [2024-07-12 12:29:13.921046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:45.087 [2024-07-12 12:29:13.921282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:45.087 [2024-07-12 12:29:13.921537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:45.087 [2024-07-12 12:29:13.921961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:45.087 [2024-07-12 12:29:13.922166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:45.087 [2024-07-12 12:29:13.922389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:45.087 [2024-07-12 12:29:13.922588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:45.087 [2024-07-12 12:29:13.923145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:45.087 [2024-07-12 12:29:13.923382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:45.087 [2024-07-12 12:29:13.923572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:45.087 [2024-07-12 12:29:13.923814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 passed 00:17:45.087 Test: blockdev nvme passthru rw ...cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:45.087 passed 00:17:45.087 Test: blockdev nvme passthru vendor specific ...[2024-07-12 12:29:13.924859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:45.087 [2024-07-12 12:29:13.925078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:45.087 [2024-07-12 12:29:13.925328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:45.087 [2024-07-12 12:29:13.925499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:45.087 [2024-07-12 12:29:13.925824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:45.087 [2024-07-12 12:29:13.926007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:45.087 passed 00:17:45.087 Test: blockdev nvme admin passthru ...[2024-07-12 12:29:13.926360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:45.087 [2024-07-12 12:29:13.926459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:45.087 passed 00:17:45.087 Test: blockdev copy ...passed 00:17:45.087 00:17:45.087 Run Summary: Type Total Ran Passed Failed Inactive 00:17:45.088 suites 1 1 n/a 0 0 00:17:45.088 tests 23 23 23 0 0 00:17:45.088 asserts 152 152 152 0 n/a 00:17:45.088 00:17:45.088 Elapsed time = 0.180 seconds 00:17:45.345 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:45.345 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.345 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:45.345 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.345 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:45.345 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:45.345 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:45.345 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:45.346 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:45.346 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:45.346 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:45.346 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:45.346 rmmod nvme_tcp 00:17:45.346 rmmod nvme_fabrics 00:17:45.346 rmmod nvme_keyring 00:17:45.346 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:45.346 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:45.346 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:45.346 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 84756 ']' 00:17:45.346 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 84756 00:17:45.346 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 84756 ']' 00:17:45.346 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 84756 00:17:45.346 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:17:45.346 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:45.346 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84756 00:17:45.346 killing process with pid 84756 00:17:45.346 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:45.346 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:45.346 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84756' 00:17:45.346 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 84756 00:17:45.346 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 84756 00:17:45.914 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:45.914 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:45.914 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:45.914 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:45.914 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:45.914 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.914 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.914 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.914 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:45.914 ************************************ 00:17:45.914 END TEST nvmf_bdevio_no_huge 00:17:45.914 ************************************ 00:17:45.914 00:17:45.914 real 0m2.975s 00:17:45.914 user 0m9.758s 00:17:45.914 sys 0m1.233s 00:17:45.914 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:45.914 12:29:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:45.914 12:29:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:45.914 12:29:14 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:45.914 12:29:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:45.914 12:29:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:45.914 12:29:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:45.914 ************************************ 00:17:45.914 START TEST nvmf_tls 00:17:45.914 ************************************ 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:45.914 * Looking for test storage... 
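The nvmf_tls run that starts here builds a TLS-enabled TCP target and then exercises it with both matching and non-matching PSKs. Condensed from the rpc.py calls that appear later in this log (the target is started with --wait-for-rpc, and the key file is the mktemp path the test creates), the target-side setup amounts to roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc sock_set_default_impl -i ssl                     # route TCP sockets through the ssl implementation
  $rpc sock_impl_set_options -i ssl --tls-version 13    # negotiate TLS 1.3
  $rpc framework_start_init                             # finish startup deferred by --wait-for-rpc
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener, per the "experimental" notice below
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hz773JhsDZ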
00:17:45.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:45.914 12:29:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:46.174 Cannot find device "nvmf_tgt_br" 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:46.174 Cannot find device "nvmf_tgt_br2" 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:46.174 Cannot find device "nvmf_tgt_br" 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:46.174 Cannot find device "nvmf_tgt_br2" 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:46.174 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:46.174 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:46.174 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:46.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:17:46.437 00:17:46.437 --- 10.0.0.2 ping statistics --- 00:17:46.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.437 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:46.437 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:46.437 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:17:46.437 00:17:46.437 --- 10.0.0.3 ping statistics --- 00:17:46.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.437 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:46.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:46.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:46.437 00:17:46.437 --- 10.0.0.1 ping statistics --- 00:17:46.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.437 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84971 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84971 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:46.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84971 ']' 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.437 12:29:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.437 [2024-07-12 12:29:15.365817] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:46.437 [2024-07-12 12:29:15.366209] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.437 [2024-07-12 12:29:15.510215] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.700 [2024-07-12 12:29:15.607841] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.700 [2024-07-12 12:29:15.608113] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
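The nvmftestinit sequence above is what gives the rest of the run its addresses: the initiator stays in the root namespace on nvmf_init_if (10.0.0.1), the target runs inside the nvmf_tgt_ns_spdk namespace on nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), and everything is joined by the nvmf_br bridge, which is why the pings above succeed in both directions. Condensed from the ip commands above (link-up steps omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT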
00:17:46.700 [2024-07-12 12:29:15.608278] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.700 [2024-07-12 12:29:15.608403] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.700 [2024-07-12 12:29:15.608417] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.700 [2024-07-12 12:29:15.608451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.634 12:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.634 12:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:47.634 12:29:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:47.634 12:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:47.634 12:29:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.634 12:29:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.634 12:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:47.634 12:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:47.634 true 00:17:47.634 12:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:47.634 12:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:47.893 12:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:47.893 12:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:47.893 12:29:16 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:48.152 12:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:48.152 12:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:48.410 12:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:48.410 12:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:48.410 12:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:48.668 12:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:48.668 12:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:48.926 12:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:48.926 12:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:48.926 12:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:48.926 12:29:17 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:49.247 12:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:49.247 12:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:49.247 12:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:49.552 12:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:49.552 12:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
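The target/tls.sh lines around this point are a set/get round-trip on the ssl sock implementation: write a TLS version (13, then 7), read it back, then toggle kTLS on and off the same way. In isolation the sequence is:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc sock_impl_set_options -i ssl --tls-version 13
  $rpc sock_impl_get_options -i ssl | jq -r .tls_version   # expect 13
  $rpc sock_impl_set_options -i ssl --tls-version 7
  $rpc sock_impl_get_options -i ssl | jq -r .tls_version   # expect 7
  $rpc sock_impl_set_options -i ssl --enable-ktls
  $rpc sock_impl_get_options -i ssl | jq -r .enable_ktls   # expect true
  $rpc sock_impl_set_options -i ssl --disable-ktls
  $rpc sock_impl_get_options -i ssl | jq -r .enable_ktls   # expect false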
00:17:49.810 12:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:49.810 12:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:49.810 12:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:50.067 12:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:50.067 12:29:18 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.hz773JhsDZ 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.epPsGXXWeE 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.hz773JhsDZ 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.epPsGXXWeE 00:17:50.325 12:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:50.583 12:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:50.840 [2024-07-12 12:29:19.794514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:17:50.840 12:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.hz773JhsDZ 00:17:50.840 12:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hz773JhsDZ 00:17:50.840 12:29:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:51.097 [2024-07-12 12:29:20.100697] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.097 12:29:20 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:51.356 12:29:20 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:51.614 [2024-07-12 12:29:20.612868] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:51.614 [2024-07-12 12:29:20.613115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.614 12:29:20 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:52.179 malloc0 00:17:52.179 12:29:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:52.179 12:29:21 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hz773JhsDZ 00:17:52.436 [2024-07-12 12:29:21.460792] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:52.436 12:29:21 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.hz773JhsDZ 00:18:04.711 Initializing NVMe Controllers 00:18:04.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:04.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:04.711 Initialization complete. Launching workers. 
00:18:04.711 ======================================================== 00:18:04.711 Latency(us) 00:18:04.711 Device Information : IOPS MiB/s Average min max 00:18:04.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9589.47 37.46 6676.26 1501.31 9141.98 00:18:04.711 ======================================================== 00:18:04.711 Total : 9589.47 37.46 6676.26 1501.31 9141.98 00:18:04.711 00:18:04.711 12:29:31 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hz773JhsDZ 00:18:04.711 12:29:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:04.711 12:29:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:04.711 12:29:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:04.711 12:29:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hz773JhsDZ' 00:18:04.711 12:29:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:04.711 12:29:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85208 00:18:04.711 12:29:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:04.711 12:29:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:04.711 12:29:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85208 /var/tmp/bdevperf.sock 00:18:04.711 12:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85208 ']' 00:18:04.711 12:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.711 12:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.711 12:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.711 12:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.712 12:29:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.712 [2024-07-12 12:29:31.727933] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:18:04.712 [2024-07-12 12:29:31.728261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85208 ] 00:18:04.712 [2024-07-12 12:29:31.863357] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.712 [2024-07-12 12:29:31.972771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.712 [2024-07-12 12:29:32.026362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:04.712 12:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.712 12:29:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:04.712 12:29:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hz773JhsDZ 00:18:04.712 [2024-07-12 12:29:32.930518] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:04.712 [2024-07-12 12:29:32.930657] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:04.712 TLSTESTn1 00:18:04.712 12:29:33 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:04.712 Running I/O for 10 seconds... 00:18:14.775 00:18:14.775 Latency(us) 00:18:14.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.775 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:14.775 Verification LBA range: start 0x0 length 0x2000 00:18:14.775 TLSTESTn1 : 10.02 4066.69 15.89 0.00 0.00 31416.63 5510.98 24546.21 00:18:14.775 =================================================================================================================== 00:18:14.775 Total : 4066.69 15.89 0.00 0.00 31416.63 5510.98 24546.21 00:18:14.775 0 00:18:14.775 12:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 85208 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85208 ']' 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85208 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85208 00:18:14.776 killing process with pid 85208 00:18:14.776 Received shutdown signal, test time was about 10.000000 seconds 00:18:14.776 00:18:14.776 Latency(us) 00:18:14.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.776 =================================================================================================================== 00:18:14.776 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85208' 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85208 00:18:14.776 [2024-07-12 12:29:43.186403] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85208 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.epPsGXXWeE 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.epPsGXXWeE 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.epPsGXXWeE 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.epPsGXXWeE' 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85336 00:18:14.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85336 /var/tmp/bdevperf.sock 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85336 ']' 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.776 12:29:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.776 [2024-07-12 12:29:43.467957] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:18:14.776 [2024-07-12 12:29:43.468037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85336 ] 00:18:14.776 [2024-07-12 12:29:43.604734] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.776 [2024-07-12 12:29:43.697841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.776 [2024-07-12 12:29:43.751646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:15.711 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.711 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:15.711 12:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.epPsGXXWeE 00:18:15.711 [2024-07-12 12:29:44.743126] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:15.711 [2024-07-12 12:29:44.743438] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:15.711 [2024-07-12 12:29:44.754025] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spd[2024-07-12 12:29:44.754076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209a2d0 (107): Transport endpoint is not connected 00:18:15.711 k_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:15.711 [2024-07-12 12:29:44.755067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209a2d0 (9): Bad file descriptor 00:18:15.711 [2024-07-12 12:29:44.756065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:15.711 [2024-07-12 12:29:44.756902] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:15.711 [2024-07-12 12:29:44.756947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
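This is the first negative case: TLSTEST attaches with /tmp/tmp.epPsGXXWeE, the second key, which was never registered on the target with nvmf_subsystem_add_host, so the handshake is torn down and bdev_nvme_attach_controller returns the Input/output error dumped below. The NOT/valid_exec_arg scaffolding in the xtrace is an expect-failure wrapper; a simplified sketch of what it asserts (the real helper in autotest_common.sh does more bookkeeping):

  # Simplified expect-failure wrapper: succeed only if the wrapped command fails
  NOT() {
      if "$@"; then
          return 1
      fi
      return 0
  }
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.epPsGXXWeE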
00:18:15.711 request: 00:18:15.711 { 00:18:15.711 "name": "TLSTEST", 00:18:15.711 "trtype": "tcp", 00:18:15.711 "traddr": "10.0.0.2", 00:18:15.711 "adrfam": "ipv4", 00:18:15.711 "trsvcid": "4420", 00:18:15.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.711 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:15.711 "prchk_reftag": false, 00:18:15.711 "prchk_guard": false, 00:18:15.711 "hdgst": false, 00:18:15.711 "ddgst": false, 00:18:15.711 "psk": "/tmp/tmp.epPsGXXWeE", 00:18:15.711 "method": "bdev_nvme_attach_controller", 00:18:15.711 "req_id": 1 00:18:15.711 } 00:18:15.711 Got JSON-RPC error response 00:18:15.711 response: 00:18:15.711 { 00:18:15.711 "code": -5, 00:18:15.711 "message": "Input/output error" 00:18:15.711 } 00:18:15.711 12:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 85336 00:18:15.711 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85336 ']' 00:18:15.711 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85336 00:18:15.711 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:15.711 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:15.711 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85336 00:18:15.970 killing process with pid 85336 00:18:15.970 Received shutdown signal, test time was about 10.000000 seconds 00:18:15.970 00:18:15.970 Latency(us) 00:18:15.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.970 =================================================================================================================== 00:18:15.970 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85336' 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85336 00:18:15.970 [2024-07-12 12:29:44.797319] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85336 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hz773JhsDZ 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hz773JhsDZ 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:15.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hz773JhsDZ 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hz773JhsDZ' 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85369 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85369 /var/tmp/bdevperf.sock 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85369 ']' 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.970 12:29:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.970 [2024-07-12 12:29:45.033319] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:18:15.970 [2024-07-12 12:29:45.033604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85369 ] 00:18:16.228 [2024-07-12 12:29:45.167339] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.228 [2024-07-12 12:29:45.243935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.228 [2024-07-12 12:29:45.298205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:17.163 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.163 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:17.163 12:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.hz773JhsDZ 00:18:17.163 [2024-07-12 12:29:46.223732] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:17.163 [2024-07-12 12:29:46.224089] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:17.163 [2024-07-12 12:29:46.234077] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:17.163 [2024-07-12 12:29:46.234286] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:17.163 [2024-07-12 12:29:46.234462] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:17.163 [2024-07-12 12:29:46.234716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24502d0 (107): Transport endpoint is not connected 00:18:17.163 [2024-07-12 12:29:46.235706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24502d0 (9): Bad file descriptor 00:18:17.163 [2024-07-12 12:29:46.236703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:17.163 [2024-07-12 12:29:46.236727] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:17.163 [2024-07-12 12:29:46.236740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
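The second negative case presents the right key from the wrong host: host2 connects to cnode1 with /tmp/tmp.hz773JhsDZ, but the target resolves PSKs by the identity string "NVMe0R01 <hostnqn> <subnqn>" and only host1 was registered for cnode1, hence the "Could not find PSK for identity" error above and the RPC failure below. For that pairing to work, host2 would need its own registration, for example (hypothetical, not part of this run):

  # Hypothetical: authorize host2 for cnode1 with a PSK file of its own
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.hz773JhsDZ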
00:18:17.163 request: 00:18:17.163 { 00:18:17.163 "name": "TLSTEST", 00:18:17.163 "trtype": "tcp", 00:18:17.163 "traddr": "10.0.0.2", 00:18:17.163 "adrfam": "ipv4", 00:18:17.163 "trsvcid": "4420", 00:18:17.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.163 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:17.163 "prchk_reftag": false, 00:18:17.163 "prchk_guard": false, 00:18:17.163 "hdgst": false, 00:18:17.163 "ddgst": false, 00:18:17.163 "psk": "/tmp/tmp.hz773JhsDZ", 00:18:17.163 "method": "bdev_nvme_attach_controller", 00:18:17.163 "req_id": 1 00:18:17.163 } 00:18:17.163 Got JSON-RPC error response 00:18:17.163 response: 00:18:17.163 { 00:18:17.163 "code": -5, 00:18:17.163 "message": "Input/output error" 00:18:17.163 } 00:18:17.422 12:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 85369 00:18:17.422 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85369 ']' 00:18:17.422 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85369 00:18:17.422 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:17.422 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:17.422 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85369 00:18:17.422 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:17.422 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:17.422 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85369' 00:18:17.423 killing process with pid 85369 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85369 00:18:17.423 Received shutdown signal, test time was about 10.000000 seconds 00:18:17.423 00:18:17.423 Latency(us) 00:18:17.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.423 =================================================================================================================== 00:18:17.423 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85369 00:18:17.423 [2024-07-12 12:29:46.280079] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hz773JhsDZ 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hz773JhsDZ 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:17.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hz773JhsDZ 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hz773JhsDZ' 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85391 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85391 /var/tmp/bdevperf.sock 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85391 ']' 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:17.423 12:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.681 [2024-07-12 12:29:46.520839] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:18:17.681 [2024-07-12 12:29:46.521945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85391 ] 00:18:17.681 [2024-07-12 12:29:46.656281] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.681 [2024-07-12 12:29:46.733696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.939 [2024-07-12 12:29:46.787424] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:18.507 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.507 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:18.507 12:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hz773JhsDZ 00:18:18.765 [2024-07-12 12:29:47.706393] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:18.765 [2024-07-12 12:29:47.706707] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:18.765 [2024-07-12 12:29:47.714246] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:18.765 [2024-07-12 12:29:47.714450] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:18.765 [2024-07-12 12:29:47.714626] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:18.765 [2024-07-12 12:29:47.715350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178d2d0 (107): Transport endpoint is not connected 00:18:18.765 [2024-07-12 12:29:47.716340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178d2d0 (9): Bad file descriptor 00:18:18.766 [2024-07-12 12:29:47.717338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:18.766 [2024-07-12 12:29:47.717498] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:18.766 [2024-07-12 12:29:47.717612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:18.766 request: 00:18:18.766 { 00:18:18.766 "name": "TLSTEST", 00:18:18.766 "trtype": "tcp", 00:18:18.766 "traddr": "10.0.0.2", 00:18:18.766 "adrfam": "ipv4", 00:18:18.766 "trsvcid": "4420", 00:18:18.766 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:18.766 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.766 "prchk_reftag": false, 00:18:18.766 "prchk_guard": false, 00:18:18.766 "hdgst": false, 00:18:18.766 "ddgst": false, 00:18:18.766 "psk": "/tmp/tmp.hz773JhsDZ", 00:18:18.766 "method": "bdev_nvme_attach_controller", 00:18:18.766 "req_id": 1 00:18:18.766 } 00:18:18.766 Got JSON-RPC error response 00:18:18.766 response: 00:18:18.766 { 00:18:18.766 "code": -5, 00:18:18.766 "message": "Input/output error" 00:18:18.766 } 00:18:18.766 12:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 85391 00:18:18.766 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85391 ']' 00:18:18.766 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85391 00:18:18.766 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:18.766 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:18.766 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85391 00:18:18.766 killing process with pid 85391 00:18:18.766 Received shutdown signal, test time was about 10.000000 seconds 00:18:18.766 00:18:18.766 Latency(us) 00:18:18.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.766 =================================================================================================================== 00:18:18.766 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:18.766 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:18.766 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:18.766 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85391' 00:18:18.766 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85391 00:18:18.766 [2024-07-12 12:29:47.757704] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:18.766 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85391 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:18:19.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85418 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85418 /var/tmp/bdevperf.sock 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85418 ']' 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.024 12:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.024 [2024-07-12 12:29:47.995213] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:18:19.024 [2024-07-12 12:29:47.995307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85418 ] 00:18:19.282 [2024-07-12 12:29:48.131598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.282 [2024-07-12 12:29:48.213251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.282 [2024-07-12 12:29:48.266481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:20.219 12:29:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.219 12:29:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:20.219 12:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:20.219 [2024-07-12 12:29:49.195541] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:20.219 [2024-07-12 12:29:49.197482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18686b0 (9): Bad file descriptor 00:18:20.219 [2024-07-12 12:29:49.198477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:20.219 [2024-07-12 12:29:49.198497] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:20.219 [2024-07-12 12:29:49.198511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:20.219 request: 00:18:20.219 { 00:18:20.219 "name": "TLSTEST", 00:18:20.219 "trtype": "tcp", 00:18:20.219 "traddr": "10.0.0.2", 00:18:20.219 "adrfam": "ipv4", 00:18:20.219 "trsvcid": "4420", 00:18:20.219 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.219 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.219 "prchk_reftag": false, 00:18:20.219 "prchk_guard": false, 00:18:20.219 "hdgst": false, 00:18:20.219 "ddgst": false, 00:18:20.219 "method": "bdev_nvme_attach_controller", 00:18:20.219 "req_id": 1 00:18:20.219 } 00:18:20.219 Got JSON-RPC error response 00:18:20.219 response: 00:18:20.219 { 00:18:20.219 "code": -5, 00:18:20.219 "message": "Input/output error" 00:18:20.219 } 00:18:20.219 12:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 85418 00:18:20.219 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85418 ']' 00:18:20.219 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85418 00:18:20.219 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:20.219 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:20.219 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85418 00:18:20.219 killing process with pid 85418 00:18:20.219 Received shutdown signal, test time was about 10.000000 seconds 00:18:20.219 00:18:20.219 Latency(us) 00:18:20.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.219 =================================================================================================================== 00:18:20.219 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:20.219 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:20.219 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:20.219 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85418' 00:18:20.219 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85418 00:18:20.219 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85418 00:18:20.477 12:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:20.477 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:20.477 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:20.477 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:20.477 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:20.477 12:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 84971 00:18:20.477 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84971 ']' 00:18:20.477 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84971 00:18:20.477 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:20.477 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:20.477 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84971 00:18:20.477 killing process with pid 84971 00:18:20.477 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:20.477 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:20.477 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
84971' 00:18:20.477 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84971 00:18:20.477 [2024-07-12 12:29:49.462406] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:20.477 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84971 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.9Vy62Z5vs2 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.9Vy62Z5vs2 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85456 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85456 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85456 ']' 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.736 12:29:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.736 [2024-07-12 12:29:49.811001] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
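For readers following the trace: the format_interchange_psk / format_key helpers invoked above build the NVMe TLS PSK interchange string that the rest of this test uses. A minimal sketch of that computation, assuming the configured PSK is the literal ASCII key string shown and that its CRC-32 is appended little-endian before base64 encoding (both are assumptions about the helper's internals, not taken verbatim from it):

    key=00112233445566778899aabbccddeeff0011223344556677
    # interchange format: NVMeTLSkey-1:<2-digit hash indicator>:base64(key || CRC-32(key)):
    python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); crc = zlib.crc32(k).to_bytes(4, "little"); print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k + crc).decode()))' "$key" 2

If those assumptions hold, the printed value is the key_long captured above (NVMeTLSkey-1:02:MDAx...wWXNJw==:), which the test writes to a mktemp file and chmods to 0600 so it can be handed to the target and initiator as a PSK path.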
00:18:20.736 [2024-07-12 12:29:49.811130] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.996 [2024-07-12 12:29:49.956730] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.996 [2024-07-12 12:29:50.053575] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.996 [2024-07-12 12:29:50.053632] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.996 [2024-07-12 12:29:50.053645] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.996 [2024-07-12 12:29:50.053653] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.996 [2024-07-12 12:29:50.053661] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:20.996 [2024-07-12 12:29:50.053691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.254 [2024-07-12 12:29:50.106994] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:21.820 12:29:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.820 12:29:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:21.820 12:29:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:21.820 12:29:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:21.820 12:29:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.820 12:29:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.820 12:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.9Vy62Z5vs2 00:18:21.820 12:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9Vy62Z5vs2 00:18:21.820 12:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:22.078 [2024-07-12 12:29:51.110011] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.078 12:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:22.337 12:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:22.595 [2024-07-12 12:29:51.574101] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:22.595 [2024-07-12 12:29:51.574324] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.595 12:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:22.853 malloc0 00:18:22.853 12:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:23.111 12:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Vy62Z5vs2 00:18:23.369 
[2024-07-12 12:29:52.321312] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:23.369 12:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9Vy62Z5vs2 00:18:23.369 12:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:23.369 12:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:23.369 12:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:23.369 12:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9Vy62Z5vs2' 00:18:23.369 12:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:23.369 12:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:23.369 12:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85511 00:18:23.369 12:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:23.369 12:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85511 /var/tmp/bdevperf.sock 00:18:23.369 12:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85511 ']' 00:18:23.369 12:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:23.369 12:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:23.369 12:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:23.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.369 12:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:23.369 12:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.369 [2024-07-12 12:29:52.380247] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:18:23.369 [2024-07-12 12:29:52.380480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85511 ] 00:18:23.627 [2024-07-12 12:29:52.518289] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.627 [2024-07-12 12:29:52.616763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.627 [2024-07-12 12:29:52.673598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:24.561 12:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:24.561 12:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:24.561 12:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Vy62Z5vs2 00:18:24.561 [2024-07-12 12:29:53.492260] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:24.561 [2024-07-12 12:29:53.492572] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:24.561 TLSTESTn1 00:18:24.561 12:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:24.820 Running I/O for 10 seconds... 00:18:34.788 00:18:34.788 Latency(us) 00:18:34.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.788 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:34.788 Verification LBA range: start 0x0 length 0x2000 00:18:34.788 TLSTESTn1 : 10.03 3910.71 15.28 0.00 0.00 32659.62 7298.33 35746.91 00:18:34.788 =================================================================================================================== 00:18:34.788 Total : 3910.71 15.28 0.00 0.00 32659.62 7298.33 35746.91 00:18:34.788 0 00:18:34.788 12:30:03 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:34.788 12:30:03 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 85511 00:18:34.788 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85511 ']' 00:18:34.788 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85511 00:18:34.788 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:34.788 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:34.788 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85511 00:18:34.788 killing process with pid 85511 00:18:34.788 Received shutdown signal, test time was about 10.000000 seconds 00:18:34.788 00:18:34.788 Latency(us) 00:18:34.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.788 =================================================================================================================== 00:18:34.788 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:34.788 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:34.788 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:18:34.788 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85511' 00:18:34.788 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85511 00:18:34.788 [2024-07-12 12:30:03.775272] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:34.788 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85511 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.9Vy62Z5vs2 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9Vy62Z5vs2 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9Vy62Z5vs2 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:35.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9Vy62Z5vs2 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9Vy62Z5vs2' 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85640 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85640 /var/tmp/bdevperf.sock 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85640 ']' 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:35.046 12:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.046 [2024-07-12 12:30:04.033146] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:18:35.046 [2024-07-12 12:30:04.033238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85640 ] 00:18:35.304 [2024-07-12 12:30:04.164779] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.304 [2024-07-12 12:30:04.257647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.304 [2024-07-12 12:30:04.311901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:36.237 12:30:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:36.237 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:36.237 12:30:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Vy62Z5vs2 00:18:36.237 [2024-07-12 12:30:05.247692] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:36.237 [2024-07-12 12:30:05.248021] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:36.237 [2024-07-12 12:30:05.248162] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.9Vy62Z5vs2 00:18:36.237 request: 00:18:36.237 { 00:18:36.237 "name": "TLSTEST", 00:18:36.237 "trtype": "tcp", 00:18:36.237 "traddr": "10.0.0.2", 00:18:36.237 "adrfam": "ipv4", 00:18:36.237 "trsvcid": "4420", 00:18:36.237 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.237 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:36.237 "prchk_reftag": false, 00:18:36.237 "prchk_guard": false, 00:18:36.237 "hdgst": false, 00:18:36.237 "ddgst": false, 00:18:36.237 "psk": "/tmp/tmp.9Vy62Z5vs2", 00:18:36.237 "method": "bdev_nvme_attach_controller", 00:18:36.237 "req_id": 1 00:18:36.237 } 00:18:36.237 Got JSON-RPC error response 00:18:36.237 response: 00:18:36.237 { 00:18:36.237 "code": -1, 00:18:36.237 "message": "Operation not permitted" 00:18:36.237 } 00:18:36.237 12:30:05 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 85640 00:18:36.237 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85640 ']' 00:18:36.237 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85640 00:18:36.237 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:36.237 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:36.237 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85640 00:18:36.237 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:36.237 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:36.237 killing process with pid 85640 00:18:36.237 Received shutdown signal, test time was about 10.000000 seconds 00:18:36.237 00:18:36.237 Latency(us) 00:18:36.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.237 =================================================================================================================== 00:18:36.237 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:36.237 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 85640' 00:18:36.237 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85640 00:18:36.237 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85640 00:18:36.580 12:30:05 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:36.580 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:36.580 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:36.580 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:36.580 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:36.580 12:30:05 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 85456 00:18:36.580 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85456 ']' 00:18:36.580 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85456 00:18:36.580 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:36.580 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:36.580 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85456 00:18:36.580 killing process with pid 85456 00:18:36.580 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:36.580 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:36.580 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85456' 00:18:36.580 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85456 00:18:36.580 [2024-07-12 12:30:05.524567] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:36.580 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85456 00:18:36.838 12:30:05 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:36.838 12:30:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:36.838 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:36.838 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.838 12:30:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85678 00:18:36.838 12:30:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:36.838 12:30:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85678 00:18:36.838 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85678 ']' 00:18:36.838 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.838 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:36.838 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.838 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:36.838 12:30:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.838 [2024-07-12 12:30:05.815231] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:18:36.838 [2024-07-12 12:30:05.815353] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.097 [2024-07-12 12:30:05.956194] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.097 [2024-07-12 12:30:06.052457] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.097 [2024-07-12 12:30:06.052518] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.097 [2024-07-12 12:30:06.052546] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.097 [2024-07-12 12:30:06.052555] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.097 [2024-07-12 12:30:06.052563] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:37.097 [2024-07-12 12:30:06.052596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.097 [2024-07-12 12:30:06.107555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:38.033 12:30:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.033 12:30:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:38.033 12:30:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:38.033 12:30:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:38.033 12:30:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.033 12:30:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.033 12:30:06 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.9Vy62Z5vs2 00:18:38.033 12:30:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:38.033 12:30:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.9Vy62Z5vs2 00:18:38.033 12:30:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:38.033 12:30:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:38.033 12:30:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:38.033 12:30:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:38.033 12:30:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.9Vy62Z5vs2 00:18:38.033 12:30:06 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9Vy62Z5vs2 00:18:38.033 12:30:06 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:38.033 [2024-07-12 12:30:07.088365] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.033 12:30:07 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:38.292 12:30:07 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:38.551 [2024-07-12 12:30:07.552461] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:18:38.551 [2024-07-12 12:30:07.552744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.551 12:30:07 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:38.815 malloc0 00:18:38.815 12:30:07 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:39.073 12:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Vy62Z5vs2 00:18:39.363 [2024-07-12 12:30:08.296202] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:39.363 [2024-07-12 12:30:08.296254] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:39.363 [2024-07-12 12:30:08.296290] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:39.363 request: 00:18:39.363 { 00:18:39.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.363 "host": "nqn.2016-06.io.spdk:host1", 00:18:39.363 "psk": "/tmp/tmp.9Vy62Z5vs2", 00:18:39.363 "method": "nvmf_subsystem_add_host", 00:18:39.363 "req_id": 1 00:18:39.363 } 00:18:39.363 Got JSON-RPC error response 00:18:39.363 response: 00:18:39.363 { 00:18:39.363 "code": -32603, 00:18:39.363 "message": "Internal error" 00:18:39.363 } 00:18:39.363 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:39.363 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:39.363 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:39.363 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:39.363 12:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 85678 00:18:39.363 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85678 ']' 00:18:39.363 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85678 00:18:39.363 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:39.363 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:39.363 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85678 00:18:39.363 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:39.363 killing process with pid 85678 00:18:39.363 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:39.363 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85678' 00:18:39.363 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85678 00:18:39.363 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85678 00:18:39.666 12:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.9Vy62Z5vs2 00:18:39.666 12:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:39.666 12:30:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:39.666 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:39.666 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.666 12:30:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85743 00:18:39.666 
12:30:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:39.666 12:30:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85743 00:18:39.666 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85743 ']' 00:18:39.666 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.666 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:39.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.666 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.666 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:39.666 12:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.666 [2024-07-12 12:30:08.615955] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:39.666 [2024-07-12 12:30:08.616197] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.924 [2024-07-12 12:30:08.749457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.924 [2024-07-12 12:30:08.838550] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.924 [2024-07-12 12:30:08.838604] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.924 [2024-07-12 12:30:08.838632] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.924 [2024-07-12 12:30:08.838641] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.924 [2024-07-12 12:30:08.838648] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:39.924 [2024-07-12 12:30:08.838677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.924 [2024-07-12 12:30:08.893018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:40.856 12:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:40.856 12:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:40.856 12:30:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:40.856 12:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:40.856 12:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.856 12:30:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.856 12:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.9Vy62Z5vs2 00:18:40.856 12:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9Vy62Z5vs2 00:18:40.856 12:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:40.856 [2024-07-12 12:30:09.915717] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.856 12:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:41.113 12:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:41.370 [2024-07-12 12:30:10.379805] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:41.371 [2024-07-12 12:30:10.380017] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.371 12:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:41.628 malloc0 00:18:41.628 12:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:41.884 12:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Vy62Z5vs2 00:18:42.141 [2024-07-12 12:30:11.090891] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:42.141 12:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:42.141 12:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=85797 00:18:42.141 12:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:42.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
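For readability, the setup_nvmf_tgt sequence traced above (with the PSK file restored to mode 0600) reduces to roughly the following RPC calls; every command and flag appears verbatim in this run, only the long repo paths are shortened, and the final bdev_nvme_attach_controller call is the one issued just below once bdevperf's RPC socket is up:

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k requests a secure (TLS) channel on this listener
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Vy62Z5vs2
    # initiator side, against the bdevperf RPC socket; succeeds only while the PSK file is mode 0600
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Vy62Z5vs2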
00:18:42.141 12:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 85797 /var/tmp/bdevperf.sock 00:18:42.141 12:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85797 ']' 00:18:42.141 12:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:42.141 12:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:42.141 12:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:42.141 12:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:42.141 12:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.141 [2024-07-12 12:30:11.167164] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:42.141 [2024-07-12 12:30:11.167550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85797 ] 00:18:42.397 [2024-07-12 12:30:11.312610] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.397 [2024-07-12 12:30:11.410195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.397 [2024-07-12 12:30:11.465121] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:43.329 12:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:43.329 12:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:43.329 12:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Vy62Z5vs2 00:18:43.587 [2024-07-12 12:30:12.425647] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:43.587 [2024-07-12 12:30:12.425805] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:43.587 TLSTESTn1 00:18:43.587 12:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:18:43.845 12:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:43.845 "subsystems": [ 00:18:43.845 { 00:18:43.845 "subsystem": "keyring", 00:18:43.845 "config": [] 00:18:43.845 }, 00:18:43.845 { 00:18:43.845 "subsystem": "iobuf", 00:18:43.845 "config": [ 00:18:43.845 { 00:18:43.845 "method": "iobuf_set_options", 00:18:43.845 "params": { 00:18:43.845 "small_pool_count": 8192, 00:18:43.845 "large_pool_count": 1024, 00:18:43.845 "small_bufsize": 8192, 00:18:43.845 "large_bufsize": 135168 00:18:43.845 } 00:18:43.845 } 00:18:43.845 ] 00:18:43.845 }, 00:18:43.845 { 00:18:43.845 "subsystem": "sock", 00:18:43.845 "config": [ 00:18:43.845 { 00:18:43.845 "method": "sock_set_default_impl", 00:18:43.845 "params": { 00:18:43.845 "impl_name": "uring" 00:18:43.845 } 00:18:43.845 }, 00:18:43.845 { 00:18:43.845 "method": "sock_impl_set_options", 00:18:43.845 "params": { 00:18:43.845 "impl_name": "ssl", 00:18:43.845 "recv_buf_size": 4096, 00:18:43.845 "send_buf_size": 4096, 00:18:43.845 "enable_recv_pipe": true, 00:18:43.845 
"enable_quickack": false, 00:18:43.845 "enable_placement_id": 0, 00:18:43.845 "enable_zerocopy_send_server": true, 00:18:43.845 "enable_zerocopy_send_client": false, 00:18:43.845 "zerocopy_threshold": 0, 00:18:43.845 "tls_version": 0, 00:18:43.845 "enable_ktls": false 00:18:43.845 } 00:18:43.845 }, 00:18:43.845 { 00:18:43.845 "method": "sock_impl_set_options", 00:18:43.845 "params": { 00:18:43.845 "impl_name": "posix", 00:18:43.845 "recv_buf_size": 2097152, 00:18:43.845 "send_buf_size": 2097152, 00:18:43.845 "enable_recv_pipe": true, 00:18:43.845 "enable_quickack": false, 00:18:43.845 "enable_placement_id": 0, 00:18:43.845 "enable_zerocopy_send_server": true, 00:18:43.845 "enable_zerocopy_send_client": false, 00:18:43.845 "zerocopy_threshold": 0, 00:18:43.845 "tls_version": 0, 00:18:43.845 "enable_ktls": false 00:18:43.845 } 00:18:43.845 }, 00:18:43.845 { 00:18:43.845 "method": "sock_impl_set_options", 00:18:43.845 "params": { 00:18:43.845 "impl_name": "uring", 00:18:43.845 "recv_buf_size": 2097152, 00:18:43.845 "send_buf_size": 2097152, 00:18:43.845 "enable_recv_pipe": true, 00:18:43.845 "enable_quickack": false, 00:18:43.845 "enable_placement_id": 0, 00:18:43.845 "enable_zerocopy_send_server": false, 00:18:43.845 "enable_zerocopy_send_client": false, 00:18:43.845 "zerocopy_threshold": 0, 00:18:43.845 "tls_version": 0, 00:18:43.845 "enable_ktls": false 00:18:43.845 } 00:18:43.845 } 00:18:43.845 ] 00:18:43.845 }, 00:18:43.845 { 00:18:43.845 "subsystem": "vmd", 00:18:43.845 "config": [] 00:18:43.845 }, 00:18:43.845 { 00:18:43.845 "subsystem": "accel", 00:18:43.845 "config": [ 00:18:43.845 { 00:18:43.845 "method": "accel_set_options", 00:18:43.845 "params": { 00:18:43.845 "small_cache_size": 128, 00:18:43.845 "large_cache_size": 16, 00:18:43.845 "task_count": 2048, 00:18:43.845 "sequence_count": 2048, 00:18:43.845 "buf_count": 2048 00:18:43.845 } 00:18:43.845 } 00:18:43.845 ] 00:18:43.845 }, 00:18:43.845 { 00:18:43.845 "subsystem": "bdev", 00:18:43.845 "config": [ 00:18:43.845 { 00:18:43.845 "method": "bdev_set_options", 00:18:43.845 "params": { 00:18:43.845 "bdev_io_pool_size": 65535, 00:18:43.845 "bdev_io_cache_size": 256, 00:18:43.845 "bdev_auto_examine": true, 00:18:43.845 "iobuf_small_cache_size": 128, 00:18:43.845 "iobuf_large_cache_size": 16 00:18:43.845 } 00:18:43.845 }, 00:18:43.845 { 00:18:43.845 "method": "bdev_raid_set_options", 00:18:43.845 "params": { 00:18:43.845 "process_window_size_kb": 1024 00:18:43.845 } 00:18:43.845 }, 00:18:43.845 { 00:18:43.845 "method": "bdev_iscsi_set_options", 00:18:43.845 "params": { 00:18:43.845 "timeout_sec": 30 00:18:43.845 } 00:18:43.845 }, 00:18:43.845 { 00:18:43.845 "method": "bdev_nvme_set_options", 00:18:43.845 "params": { 00:18:43.845 "action_on_timeout": "none", 00:18:43.845 "timeout_us": 0, 00:18:43.845 "timeout_admin_us": 0, 00:18:43.845 "keep_alive_timeout_ms": 10000, 00:18:43.845 "arbitration_burst": 0, 00:18:43.845 "low_priority_weight": 0, 00:18:43.845 "medium_priority_weight": 0, 00:18:43.845 "high_priority_weight": 0, 00:18:43.845 "nvme_adminq_poll_period_us": 10000, 00:18:43.845 "nvme_ioq_poll_period_us": 0, 00:18:43.845 "io_queue_requests": 0, 00:18:43.845 "delay_cmd_submit": true, 00:18:43.845 "transport_retry_count": 4, 00:18:43.845 "bdev_retry_count": 3, 00:18:43.845 "transport_ack_timeout": 0, 00:18:43.845 "ctrlr_loss_timeout_sec": 0, 00:18:43.845 "reconnect_delay_sec": 0, 00:18:43.845 "fast_io_fail_timeout_sec": 0, 00:18:43.845 "disable_auto_failback": false, 00:18:43.845 "generate_uuids": false, 00:18:43.845 
"transport_tos": 0, 00:18:43.845 "nvme_error_stat": false, 00:18:43.845 "rdma_srq_size": 0, 00:18:43.845 "io_path_stat": false, 00:18:43.845 "allow_accel_sequence": false, 00:18:43.845 "rdma_max_cq_size": 0, 00:18:43.845 "rdma_cm_event_timeout_ms": 0, 00:18:43.845 "dhchap_digests": [ 00:18:43.845 "sha256", 00:18:43.845 "sha384", 00:18:43.845 "sha512" 00:18:43.845 ], 00:18:43.845 "dhchap_dhgroups": [ 00:18:43.845 "null", 00:18:43.845 "ffdhe2048", 00:18:43.845 "ffdhe3072", 00:18:43.845 "ffdhe4096", 00:18:43.845 "ffdhe6144", 00:18:43.845 "ffdhe8192" 00:18:43.845 ] 00:18:43.845 } 00:18:43.845 }, 00:18:43.845 { 00:18:43.845 "method": "bdev_nvme_set_hotplug", 00:18:43.846 "params": { 00:18:43.846 "period_us": 100000, 00:18:43.846 "enable": false 00:18:43.846 } 00:18:43.846 }, 00:18:43.846 { 00:18:43.846 "method": "bdev_malloc_create", 00:18:43.846 "params": { 00:18:43.846 "name": "malloc0", 00:18:43.846 "num_blocks": 8192, 00:18:43.846 "block_size": 4096, 00:18:43.846 "physical_block_size": 4096, 00:18:43.846 "uuid": "6dac94bb-c1ab-4900-b0a3-29b941c3763e", 00:18:43.846 "optimal_io_boundary": 0 00:18:43.846 } 00:18:43.846 }, 00:18:43.846 { 00:18:43.846 "method": "bdev_wait_for_examine" 00:18:43.846 } 00:18:43.846 ] 00:18:43.846 }, 00:18:43.846 { 00:18:43.846 "subsystem": "nbd", 00:18:43.846 "config": [] 00:18:43.846 }, 00:18:43.846 { 00:18:43.846 "subsystem": "scheduler", 00:18:43.846 "config": [ 00:18:43.846 { 00:18:43.846 "method": "framework_set_scheduler", 00:18:43.846 "params": { 00:18:43.846 "name": "static" 00:18:43.846 } 00:18:43.846 } 00:18:43.846 ] 00:18:43.846 }, 00:18:43.846 { 00:18:43.846 "subsystem": "nvmf", 00:18:43.846 "config": [ 00:18:43.846 { 00:18:43.846 "method": "nvmf_set_config", 00:18:43.846 "params": { 00:18:43.846 "discovery_filter": "match_any", 00:18:43.846 "admin_cmd_passthru": { 00:18:43.846 "identify_ctrlr": false 00:18:43.846 } 00:18:43.846 } 00:18:43.846 }, 00:18:43.846 { 00:18:43.846 "method": "nvmf_set_max_subsystems", 00:18:43.846 "params": { 00:18:43.846 "max_subsystems": 1024 00:18:43.846 } 00:18:43.846 }, 00:18:43.846 { 00:18:43.846 "method": "nvmf_set_crdt", 00:18:43.846 "params": { 00:18:43.846 "crdt1": 0, 00:18:43.846 "crdt2": 0, 00:18:43.846 "crdt3": 0 00:18:43.846 } 00:18:43.846 }, 00:18:43.846 { 00:18:43.846 "method": "nvmf_create_transport", 00:18:43.846 "params": { 00:18:43.846 "trtype": "TCP", 00:18:43.846 "max_queue_depth": 128, 00:18:43.846 "max_io_qpairs_per_ctrlr": 127, 00:18:43.846 "in_capsule_data_size": 4096, 00:18:43.846 "max_io_size": 131072, 00:18:43.846 "io_unit_size": 131072, 00:18:43.846 "max_aq_depth": 128, 00:18:43.846 "num_shared_buffers": 511, 00:18:43.846 "buf_cache_size": 4294967295, 00:18:43.846 "dif_insert_or_strip": false, 00:18:43.846 "zcopy": false, 00:18:43.846 "c2h_success": false, 00:18:43.846 "sock_priority": 0, 00:18:43.846 "abort_timeout_sec": 1, 00:18:43.846 "ack_timeout": 0, 00:18:43.846 "data_wr_pool_size": 0 00:18:43.846 } 00:18:43.846 }, 00:18:43.846 { 00:18:43.846 "method": "nvmf_create_subsystem", 00:18:43.846 "params": { 00:18:43.846 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.846 "allow_any_host": false, 00:18:43.846 "serial_number": "SPDK00000000000001", 00:18:43.846 "model_number": "SPDK bdev Controller", 00:18:43.846 "max_namespaces": 10, 00:18:43.846 "min_cntlid": 1, 00:18:43.846 "max_cntlid": 65519, 00:18:43.846 "ana_reporting": false 00:18:43.846 } 00:18:43.846 }, 00:18:43.846 { 00:18:43.846 "method": "nvmf_subsystem_add_host", 00:18:43.846 "params": { 00:18:43.846 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:18:43.846 "host": "nqn.2016-06.io.spdk:host1", 00:18:43.846 "psk": "/tmp/tmp.9Vy62Z5vs2" 00:18:43.846 } 00:18:43.846 }, 00:18:43.846 { 00:18:43.846 "method": "nvmf_subsystem_add_ns", 00:18:43.846 "params": { 00:18:43.846 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.846 "namespace": { 00:18:43.846 "nsid": 1, 00:18:43.846 "bdev_name": "malloc0", 00:18:43.846 "nguid": "6DAC94BBC1AB4900B0A329B941C3763E", 00:18:43.846 "uuid": "6dac94bb-c1ab-4900-b0a3-29b941c3763e", 00:18:43.846 "no_auto_visible": false 00:18:43.846 } 00:18:43.846 } 00:18:43.846 }, 00:18:43.846 { 00:18:43.846 "method": "nvmf_subsystem_add_listener", 00:18:43.846 "params": { 00:18:43.846 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.846 "listen_address": { 00:18:43.846 "trtype": "TCP", 00:18:43.846 "adrfam": "IPv4", 00:18:43.846 "traddr": "10.0.0.2", 00:18:43.846 "trsvcid": "4420" 00:18:43.846 }, 00:18:43.846 "secure_channel": true 00:18:43.846 } 00:18:43.846 } 00:18:43.846 ] 00:18:43.846 } 00:18:43.846 ] 00:18:43.846 }' 00:18:43.846 12:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:44.107 12:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:44.107 "subsystems": [ 00:18:44.107 { 00:18:44.107 "subsystem": "keyring", 00:18:44.107 "config": [] 00:18:44.107 }, 00:18:44.107 { 00:18:44.107 "subsystem": "iobuf", 00:18:44.107 "config": [ 00:18:44.107 { 00:18:44.107 "method": "iobuf_set_options", 00:18:44.107 "params": { 00:18:44.107 "small_pool_count": 8192, 00:18:44.107 "large_pool_count": 1024, 00:18:44.107 "small_bufsize": 8192, 00:18:44.107 "large_bufsize": 135168 00:18:44.107 } 00:18:44.107 } 00:18:44.107 ] 00:18:44.107 }, 00:18:44.107 { 00:18:44.107 "subsystem": "sock", 00:18:44.107 "config": [ 00:18:44.107 { 00:18:44.107 "method": "sock_set_default_impl", 00:18:44.107 "params": { 00:18:44.107 "impl_name": "uring" 00:18:44.107 } 00:18:44.107 }, 00:18:44.107 { 00:18:44.107 "method": "sock_impl_set_options", 00:18:44.107 "params": { 00:18:44.107 "impl_name": "ssl", 00:18:44.107 "recv_buf_size": 4096, 00:18:44.107 "send_buf_size": 4096, 00:18:44.107 "enable_recv_pipe": true, 00:18:44.107 "enable_quickack": false, 00:18:44.107 "enable_placement_id": 0, 00:18:44.107 "enable_zerocopy_send_server": true, 00:18:44.107 "enable_zerocopy_send_client": false, 00:18:44.107 "zerocopy_threshold": 0, 00:18:44.107 "tls_version": 0, 00:18:44.107 "enable_ktls": false 00:18:44.107 } 00:18:44.107 }, 00:18:44.107 { 00:18:44.107 "method": "sock_impl_set_options", 00:18:44.107 "params": { 00:18:44.107 "impl_name": "posix", 00:18:44.107 "recv_buf_size": 2097152, 00:18:44.107 "send_buf_size": 2097152, 00:18:44.107 "enable_recv_pipe": true, 00:18:44.107 "enable_quickack": false, 00:18:44.107 "enable_placement_id": 0, 00:18:44.107 "enable_zerocopy_send_server": true, 00:18:44.107 "enable_zerocopy_send_client": false, 00:18:44.107 "zerocopy_threshold": 0, 00:18:44.107 "tls_version": 0, 00:18:44.107 "enable_ktls": false 00:18:44.107 } 00:18:44.107 }, 00:18:44.107 { 00:18:44.107 "method": "sock_impl_set_options", 00:18:44.107 "params": { 00:18:44.107 "impl_name": "uring", 00:18:44.107 "recv_buf_size": 2097152, 00:18:44.107 "send_buf_size": 2097152, 00:18:44.107 "enable_recv_pipe": true, 00:18:44.107 "enable_quickack": false, 00:18:44.107 "enable_placement_id": 0, 00:18:44.107 "enable_zerocopy_send_server": false, 00:18:44.107 "enable_zerocopy_send_client": false, 00:18:44.107 "zerocopy_threshold": 0, 00:18:44.107 "tls_version": 0, 00:18:44.107 
"enable_ktls": false 00:18:44.107 } 00:18:44.107 } 00:18:44.107 ] 00:18:44.107 }, 00:18:44.107 { 00:18:44.107 "subsystem": "vmd", 00:18:44.107 "config": [] 00:18:44.107 }, 00:18:44.107 { 00:18:44.107 "subsystem": "accel", 00:18:44.107 "config": [ 00:18:44.107 { 00:18:44.107 "method": "accel_set_options", 00:18:44.108 "params": { 00:18:44.108 "small_cache_size": 128, 00:18:44.108 "large_cache_size": 16, 00:18:44.108 "task_count": 2048, 00:18:44.108 "sequence_count": 2048, 00:18:44.108 "buf_count": 2048 00:18:44.108 } 00:18:44.108 } 00:18:44.108 ] 00:18:44.108 }, 00:18:44.108 { 00:18:44.108 "subsystem": "bdev", 00:18:44.108 "config": [ 00:18:44.108 { 00:18:44.108 "method": "bdev_set_options", 00:18:44.108 "params": { 00:18:44.108 "bdev_io_pool_size": 65535, 00:18:44.108 "bdev_io_cache_size": 256, 00:18:44.108 "bdev_auto_examine": true, 00:18:44.108 "iobuf_small_cache_size": 128, 00:18:44.108 "iobuf_large_cache_size": 16 00:18:44.108 } 00:18:44.108 }, 00:18:44.108 { 00:18:44.108 "method": "bdev_raid_set_options", 00:18:44.108 "params": { 00:18:44.108 "process_window_size_kb": 1024 00:18:44.108 } 00:18:44.108 }, 00:18:44.108 { 00:18:44.108 "method": "bdev_iscsi_set_options", 00:18:44.108 "params": { 00:18:44.108 "timeout_sec": 30 00:18:44.108 } 00:18:44.108 }, 00:18:44.108 { 00:18:44.108 "method": "bdev_nvme_set_options", 00:18:44.108 "params": { 00:18:44.108 "action_on_timeout": "none", 00:18:44.108 "timeout_us": 0, 00:18:44.108 "timeout_admin_us": 0, 00:18:44.108 "keep_alive_timeout_ms": 10000, 00:18:44.108 "arbitration_burst": 0, 00:18:44.108 "low_priority_weight": 0, 00:18:44.108 "medium_priority_weight": 0, 00:18:44.108 "high_priority_weight": 0, 00:18:44.108 "nvme_adminq_poll_period_us": 10000, 00:18:44.108 "nvme_ioq_poll_period_us": 0, 00:18:44.108 "io_queue_requests": 512, 00:18:44.108 "delay_cmd_submit": true, 00:18:44.108 "transport_retry_count": 4, 00:18:44.108 "bdev_retry_count": 3, 00:18:44.108 "transport_ack_timeout": 0, 00:18:44.108 "ctrlr_loss_timeout_sec": 0, 00:18:44.108 "reconnect_delay_sec": 0, 00:18:44.108 "fast_io_fail_timeout_sec": 0, 00:18:44.108 "disable_auto_failback": false, 00:18:44.108 "generate_uuids": false, 00:18:44.108 "transport_tos": 0, 00:18:44.108 "nvme_error_stat": false, 00:18:44.108 "rdma_srq_size": 0, 00:18:44.108 "io_path_stat": false, 00:18:44.108 "allow_accel_sequence": false, 00:18:44.108 "rdma_max_cq_size": 0, 00:18:44.108 "rdma_cm_event_timeout_ms": 0, 00:18:44.108 "dhchap_digests": [ 00:18:44.108 "sha256", 00:18:44.108 "sha384", 00:18:44.108 "sha512" 00:18:44.108 ], 00:18:44.108 "dhchap_dhgroups": [ 00:18:44.108 "null", 00:18:44.108 "ffdhe2048", 00:18:44.108 "ffdhe3072", 00:18:44.108 "ffdhe4096", 00:18:44.108 "ffdhe6144", 00:18:44.108 "ffdhe8192" 00:18:44.108 ] 00:18:44.108 } 00:18:44.108 }, 00:18:44.108 { 00:18:44.108 "method": "bdev_nvme_attach_controller", 00:18:44.108 "params": { 00:18:44.108 "name": "TLSTEST", 00:18:44.108 "trtype": "TCP", 00:18:44.108 "adrfam": "IPv4", 00:18:44.108 "traddr": "10.0.0.2", 00:18:44.108 "trsvcid": "4420", 00:18:44.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.108 "prchk_reftag": false, 00:18:44.108 "prchk_guard": false, 00:18:44.108 "ctrlr_loss_timeout_sec": 0, 00:18:44.108 "reconnect_delay_sec": 0, 00:18:44.108 "fast_io_fail_timeout_sec": 0, 00:18:44.108 "psk": "/tmp/tmp.9Vy62Z5vs2", 00:18:44.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:44.108 "hdgst": false, 00:18:44.108 "ddgst": false 00:18:44.108 } 00:18:44.108 }, 00:18:44.108 { 00:18:44.108 "method": "bdev_nvme_set_hotplug", 00:18:44.108 
"params": { 00:18:44.108 "period_us": 100000, 00:18:44.108 "enable": false 00:18:44.108 } 00:18:44.108 }, 00:18:44.108 { 00:18:44.108 "method": "bdev_wait_for_examine" 00:18:44.108 } 00:18:44.108 ] 00:18:44.108 }, 00:18:44.108 { 00:18:44.108 "subsystem": "nbd", 00:18:44.108 "config": [] 00:18:44.108 } 00:18:44.108 ] 00:18:44.108 }' 00:18:44.108 12:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 85797 00:18:44.108 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85797 ']' 00:18:44.108 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85797 00:18:44.108 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:44.108 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:44.108 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85797 00:18:44.370 killing process with pid 85797 00:18:44.370 Received shutdown signal, test time was about 10.000000 seconds 00:18:44.370 00:18:44.370 Latency(us) 00:18:44.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.370 =================================================================================================================== 00:18:44.370 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:44.370 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:44.370 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:44.370 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85797' 00:18:44.370 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85797 00:18:44.370 [2024-07-12 12:30:13.211552] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:44.370 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85797 00:18:44.370 12:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 85743 00:18:44.370 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85743 ']' 00:18:44.370 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85743 00:18:44.370 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:44.370 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:44.370 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85743 00:18:44.370 killing process with pid 85743 00:18:44.370 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:44.370 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:44.370 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85743' 00:18:44.370 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85743 00:18:44.370 [2024-07-12 12:30:13.442668] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:44.370 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85743 00:18:44.628 12:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:44.628 12:30:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:44.629 12:30:13 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:18:44.629 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.629 12:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:44.629 "subsystems": [ 00:18:44.629 { 00:18:44.629 "subsystem": "keyring", 00:18:44.629 "config": [] 00:18:44.629 }, 00:18:44.629 { 00:18:44.629 "subsystem": "iobuf", 00:18:44.629 "config": [ 00:18:44.629 { 00:18:44.629 "method": "iobuf_set_options", 00:18:44.629 "params": { 00:18:44.629 "small_pool_count": 8192, 00:18:44.629 "large_pool_count": 1024, 00:18:44.629 "small_bufsize": 8192, 00:18:44.629 "large_bufsize": 135168 00:18:44.629 } 00:18:44.629 } 00:18:44.629 ] 00:18:44.629 }, 00:18:44.629 { 00:18:44.629 "subsystem": "sock", 00:18:44.629 "config": [ 00:18:44.629 { 00:18:44.629 "method": "sock_set_default_impl", 00:18:44.629 "params": { 00:18:44.629 "impl_name": "uring" 00:18:44.629 } 00:18:44.629 }, 00:18:44.629 { 00:18:44.629 "method": "sock_impl_set_options", 00:18:44.629 "params": { 00:18:44.629 "impl_name": "ssl", 00:18:44.629 "recv_buf_size": 4096, 00:18:44.629 "send_buf_size": 4096, 00:18:44.629 "enable_recv_pipe": true, 00:18:44.629 "enable_quickack": false, 00:18:44.629 "enable_placement_id": 0, 00:18:44.629 "enable_zerocopy_send_server": true, 00:18:44.629 "enable_zerocopy_send_client": false, 00:18:44.629 "zerocopy_threshold": 0, 00:18:44.629 "tls_version": 0, 00:18:44.629 "enable_ktls": false 00:18:44.629 } 00:18:44.629 }, 00:18:44.629 { 00:18:44.629 "method": "sock_impl_set_options", 00:18:44.629 "params": { 00:18:44.629 "impl_name": "posix", 00:18:44.629 "recv_buf_size": 2097152, 00:18:44.629 "send_buf_size": 2097152, 00:18:44.629 "enable_recv_pipe": true, 00:18:44.629 "enable_quickack": false, 00:18:44.629 "enable_placement_id": 0, 00:18:44.629 "enable_zerocopy_send_server": true, 00:18:44.629 "enable_zerocopy_send_client": false, 00:18:44.629 "zerocopy_threshold": 0, 00:18:44.629 "tls_version": 0, 00:18:44.629 "enable_ktls": false 00:18:44.629 } 00:18:44.629 }, 00:18:44.629 { 00:18:44.629 "method": "sock_impl_set_options", 00:18:44.629 "params": { 00:18:44.629 "impl_name": "uring", 00:18:44.629 "recv_buf_size": 2097152, 00:18:44.629 "send_buf_size": 2097152, 00:18:44.629 "enable_recv_pipe": true, 00:18:44.629 "enable_quickack": false, 00:18:44.629 "enable_placement_id": 0, 00:18:44.629 "enable_zerocopy_send_server": false, 00:18:44.629 "enable_zerocopy_send_client": false, 00:18:44.629 "zerocopy_threshold": 0, 00:18:44.629 "tls_version": 0, 00:18:44.629 "enable_ktls": false 00:18:44.629 } 00:18:44.629 } 00:18:44.629 ] 00:18:44.629 }, 00:18:44.629 { 00:18:44.629 "subsystem": "vmd", 00:18:44.629 "config": [] 00:18:44.629 }, 00:18:44.629 { 00:18:44.629 "subsystem": "accel", 00:18:44.629 "config": [ 00:18:44.629 { 00:18:44.629 "method": "accel_set_options", 00:18:44.629 "params": { 00:18:44.629 "small_cache_size": 128, 00:18:44.629 "large_cache_size": 16, 00:18:44.629 "task_count": 2048, 00:18:44.629 "sequence_count": 2048, 00:18:44.629 "buf_count": 2048 00:18:44.629 } 00:18:44.629 } 00:18:44.629 ] 00:18:44.629 }, 00:18:44.629 { 00:18:44.629 "subsystem": "bdev", 00:18:44.629 "config": [ 00:18:44.629 { 00:18:44.629 "method": "bdev_set_options", 00:18:44.629 "params": { 00:18:44.629 "bdev_io_pool_size": 65535, 00:18:44.629 "bdev_io_cache_size": 256, 00:18:44.629 "bdev_auto_examine": true, 00:18:44.629 "iobuf_small_cache_size": 128, 00:18:44.629 "iobuf_large_cache_size": 16 00:18:44.629 } 00:18:44.629 }, 00:18:44.629 { 00:18:44.629 "method": "bdev_raid_set_options", 
00:18:44.629 "params": { 00:18:44.629 "process_window_size_kb": 1024 00:18:44.629 } 00:18:44.629 }, 00:18:44.629 { 00:18:44.629 "method": "bdev_iscsi_set_options", 00:18:44.629 "params": { 00:18:44.629 "timeout_sec": 30 00:18:44.629 } 00:18:44.629 }, 00:18:44.629 { 00:18:44.629 "method": "bdev_nvme_set_options", 00:18:44.629 "params": { 00:18:44.629 "action_on_timeout": "none", 00:18:44.629 "timeout_us": 0, 00:18:44.629 "timeout_admin_us": 0, 00:18:44.629 "keep_alive_timeout_ms": 10000, 00:18:44.629 "arbitration_burst": 0, 00:18:44.629 "low_priority_weight": 0, 00:18:44.629 "medium_priority_weight": 0, 00:18:44.629 "high_priority_weight": 0, 00:18:44.629 "nvme_adminq_poll_period_us": 10000, 00:18:44.629 "nvme_ioq_poll_period_us": 0, 00:18:44.629 "io_queue_requests": 0, 00:18:44.629 "delay_cmd_submit": true, 00:18:44.629 "transport_retry_count": 4, 00:18:44.629 "bdev_retry_count": 3, 00:18:44.629 "transport_ack_timeout": 0, 00:18:44.629 "ctrlr_loss_timeout_sec": 0, 00:18:44.629 "reconnect_delay_sec": 0, 00:18:44.629 "fast_io_fail_timeout_sec": 0, 00:18:44.629 "disable_auto_failback": false, 00:18:44.629 "generate_uuids": false, 00:18:44.629 "transport_tos": 0, 00:18:44.629 "nvme_error_stat": false, 00:18:44.629 "rdma_srq_size": 0, 00:18:44.629 "io_path_stat": false, 00:18:44.629 "allow_accel_sequence": false, 00:18:44.629 "rdma_max_cq_size": 0, 00:18:44.629 "rdma_cm_event_timeout_ms": 0, 00:18:44.629 "dhchap_digests": [ 00:18:44.629 "sha256", 00:18:44.629 "sha384", 00:18:44.629 "sha512" 00:18:44.629 ], 00:18:44.629 "dhchap_dhgroups": [ 00:18:44.629 "null", 00:18:44.629 "ffdhe2048", 00:18:44.629 "ffdhe3072", 00:18:44.629 "ffdhe4096", 00:18:44.629 "ffdhe6144", 00:18:44.629 "ffdhe8192" 00:18:44.629 ] 00:18:44.629 } 00:18:44.629 }, 00:18:44.629 { 00:18:44.629 "method": "bdev_nvme_set_hotplug", 00:18:44.629 "params": { 00:18:44.629 "period_us": 100000, 00:18:44.629 "enable": false 00:18:44.629 } 00:18:44.629 }, 00:18:44.629 { 00:18:44.629 "method": "bdev_malloc_create", 00:18:44.629 "params": { 00:18:44.629 "name": "malloc0", 00:18:44.629 "num_blocks": 8192, 00:18:44.629 "block_size": 4096, 00:18:44.629 "physical_block_size": 4096, 00:18:44.629 "uuid": "6dac94bb-c1ab-4900-b0a3-29b941c3763e", 00:18:44.629 "optimal_io_boundary": 0 00:18:44.629 } 00:18:44.629 }, 00:18:44.629 { 00:18:44.629 "method": "bdev_wait_for_examine" 00:18:44.629 } 00:18:44.629 ] 00:18:44.629 }, 00:18:44.629 { 00:18:44.629 "subsystem": "nbd", 00:18:44.629 "config": [] 00:18:44.629 }, 00:18:44.629 { 00:18:44.629 "subsystem": "scheduler", 00:18:44.629 "config": [ 00:18:44.629 { 00:18:44.629 "method": "framework_set_scheduler", 00:18:44.629 "params": { 00:18:44.629 "name": "static" 00:18:44.629 } 00:18:44.629 } 00:18:44.629 ] 00:18:44.629 }, 00:18:44.629 { 00:18:44.629 "subsystem": "nvmf", 00:18:44.629 "config": [ 00:18:44.629 { 00:18:44.629 "method": "nvmf_set_config", 00:18:44.629 "params": { 00:18:44.629 "discovery_filter": "match_any", 00:18:44.629 "admin_cmd_passthru": { 00:18:44.629 "identify_ctrlr": false 00:18:44.629 } 00:18:44.629 } 00:18:44.629 }, 00:18:44.629 { 00:18:44.629 "method": "nvmf_set_max_subsystems", 00:18:44.629 "params": { 00:18:44.629 "max_subsystems": 1024 00:18:44.629 } 00:18:44.629 }, 00:18:44.629 { 00:18:44.629 "method": "nvmf_set_crdt", 00:18:44.629 "params": { 00:18:44.629 "crdt1": 0, 00:18:44.629 "crdt2": 0, 00:18:44.629 "crdt3": 0 00:18:44.629 } 00:18:44.629 }, 00:18:44.629 { 00:18:44.629 "method": "nvmf_create_transport", 00:18:44.629 "params": { 00:18:44.629 "trtype": "TCP", 00:18:44.629 
"max_queue_depth": 128, 00:18:44.629 "max_io_qpairs_per_ctrlr": 127, 00:18:44.629 "in_capsule_data_size": 4096, 00:18:44.629 "max_io_size": 131072, 00:18:44.629 "io_unit_size": 131072, 00:18:44.629 "max_aq_depth": 128, 00:18:44.629 "num_shared_buffers": 511, 00:18:44.629 "buf_cache_size": 4294967295, 00:18:44.629 "dif_insert_or_strip": false, 00:18:44.629 "zcopy": false, 00:18:44.630 "c2h_success": false, 00:18:44.630 "sock_priority": 0, 00:18:44.630 "abort_timeout_sec": 1, 00:18:44.630 "ack_timeout": 0, 00:18:44.630 "data_wr_pool_size": 0 00:18:44.630 } 00:18:44.630 }, 00:18:44.630 { 00:18:44.630 "method": "nvmf_create_subsystem", 00:18:44.630 "params": { 00:18:44.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.630 "allow_any_host": false, 00:18:44.630 "serial_number": "SPDK00000000000001", 00:18:44.630 "model_number": "SPDK bdev Controller", 00:18:44.630 "max_namespaces": 10, 00:18:44.630 "min_cntlid": 1, 00:18:44.630 "max_cntlid": 65519, 00:18:44.630 "ana_reporting": false 00:18:44.630 } 00:18:44.630 }, 00:18:44.630 { 00:18:44.630 "method": "nvmf_subsystem_add_host", 00:18:44.630 "params": { 00:18:44.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.630 "host": "nqn.2016-06.io.spdk:host1", 00:18:44.630 "psk": "/tmp/tmp.9Vy62Z5vs2" 00:18:44.630 } 00:18:44.630 }, 00:18:44.630 { 00:18:44.630 "method": "nvmf_subsystem_add_ns", 00:18:44.630 "params": { 00:18:44.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.630 "namespace": { 00:18:44.630 "nsid": 1, 00:18:44.630 "bdev_name": "malloc0", 00:18:44.630 "nguid": "6DAC94BBC1AB4900B0A329B941C3763E", 00:18:44.630 "uuid": "6dac94bb-c1ab-4900-b0a3-29b941c3763e", 00:18:44.630 "no_auto_visible": false 00:18:44.630 } 00:18:44.630 } 00:18:44.630 }, 00:18:44.630 { 00:18:44.630 "method": "nvmf_subsystem_add_listener", 00:18:44.630 "params": { 00:18:44.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.630 "listen_address": { 00:18:44.630 "trtype": "TCP", 00:18:44.630 "adrfam": "IPv4", 00:18:44.630 "traddr": "10.0.0.2", 00:18:44.630 "trsvcid": "4420" 00:18:44.630 }, 00:18:44.630 "secure_channel": true 00:18:44.630 } 00:18:44.630 } 00:18:44.630 ] 00:18:44.630 } 00:18:44.630 ] 00:18:44.630 }' 00:18:44.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.630 12:30:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85846 00:18:44.630 12:30:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:44.630 12:30:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85846 00:18:44.630 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85846 ']' 00:18:44.630 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.630 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:44.630 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.630 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:44.630 12:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.888 [2024-07-12 12:30:13.712804] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:18:44.888 [2024-07-12 12:30:13.712893] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.888 [2024-07-12 12:30:13.847699] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.888 [2024-07-12 12:30:13.932854] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.888 [2024-07-12 12:30:13.932910] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.888 [2024-07-12 12:30:13.932924] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.888 [2024-07-12 12:30:13.932932] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.888 [2024-07-12 12:30:13.932940] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:44.888 [2024-07-12 12:30:13.933029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.146 [2024-07-12 12:30:14.099506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:45.146 [2024-07-12 12:30:14.164455] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.146 [2024-07-12 12:30:14.180367] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:45.146 [2024-07-12 12:30:14.196375] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:45.146 [2024-07-12 12:30:14.196577] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.712 12:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:45.712 12:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:45.712 12:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:45.712 12:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:45.712 12:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:45.712 12:30:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.712 12:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=85878 00:18:45.712 12:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 85878 /var/tmp/bdevperf.sock 00:18:45.712 12:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85878 ']' 00:18:45.712 12:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:45.712 12:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:45.712 12:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
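The large JSON block replayed above is not hand-written: tgtconf is the save_config dump taken from the first target, and the new target is launched with it through a file descriptor (-c /dev/fd/62); the echo and the nvmfappstart call both appear at tls.sh@203 in the trace. A minimal sketch of that replay pattern, assuming bash process substitution is what supplies the /dev/fd path (the real helper in test/nvmf/common.sh may plumb the descriptor differently):

  SPDK=/home/vagrant/spdk_repo/spdk
  tgtconf=$("$SPDK/scripts/rpc.py" save_config)                  # JSON dump of the running target
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
      -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &              # <(...) appears to the child as a /dev/fd path (assumption)
  nvmfpid=$!
  waitforlisten "$nvmfpid"                                       # blocks until /var/tmp/spdk.sock answers

The same trick is used for bdevperf just below, where the filtered configuration is handed over as -c /dev/fd/63.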
00:18:45.712 12:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:45.712 12:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:45.712 12:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.712 12:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:45.712 "subsystems": [ 00:18:45.712 { 00:18:45.712 "subsystem": "keyring", 00:18:45.712 "config": [] 00:18:45.712 }, 00:18:45.712 { 00:18:45.712 "subsystem": "iobuf", 00:18:45.712 "config": [ 00:18:45.712 { 00:18:45.713 "method": "iobuf_set_options", 00:18:45.713 "params": { 00:18:45.713 "small_pool_count": 8192, 00:18:45.713 "large_pool_count": 1024, 00:18:45.713 "small_bufsize": 8192, 00:18:45.713 "large_bufsize": 135168 00:18:45.713 } 00:18:45.713 } 00:18:45.713 ] 00:18:45.713 }, 00:18:45.713 { 00:18:45.713 "subsystem": "sock", 00:18:45.713 "config": [ 00:18:45.713 { 00:18:45.713 "method": "sock_set_default_impl", 00:18:45.713 "params": { 00:18:45.713 "impl_name": "uring" 00:18:45.713 } 00:18:45.713 }, 00:18:45.713 { 00:18:45.713 "method": "sock_impl_set_options", 00:18:45.713 "params": { 00:18:45.713 "impl_name": "ssl", 00:18:45.713 "recv_buf_size": 4096, 00:18:45.713 "send_buf_size": 4096, 00:18:45.713 "enable_recv_pipe": true, 00:18:45.713 "enable_quickack": false, 00:18:45.713 "enable_placement_id": 0, 00:18:45.713 "enable_zerocopy_send_server": true, 00:18:45.713 "enable_zerocopy_send_client": false, 00:18:45.713 "zerocopy_threshold": 0, 00:18:45.713 "tls_version": 0, 00:18:45.713 "enable_ktls": false 00:18:45.713 } 00:18:45.713 }, 00:18:45.713 { 00:18:45.713 "method": "sock_impl_set_options", 00:18:45.713 "params": { 00:18:45.713 "impl_name": "posix", 00:18:45.713 "recv_buf_size": 2097152, 00:18:45.713 "send_buf_size": 2097152, 00:18:45.713 "enable_recv_pipe": true, 00:18:45.713 "enable_quickack": false, 00:18:45.713 "enable_placement_id": 0, 00:18:45.713 "enable_zerocopy_send_server": true, 00:18:45.713 "enable_zerocopy_send_client": false, 00:18:45.713 "zerocopy_threshold": 0, 00:18:45.713 "tls_version": 0, 00:18:45.713 "enable_ktls": false 00:18:45.713 } 00:18:45.713 }, 00:18:45.713 { 00:18:45.713 "method": "sock_impl_set_options", 00:18:45.713 "params": { 00:18:45.713 "impl_name": "uring", 00:18:45.713 "recv_buf_size": 2097152, 00:18:45.713 "send_buf_size": 2097152, 00:18:45.713 "enable_recv_pipe": true, 00:18:45.713 "enable_quickack": false, 00:18:45.713 "enable_placement_id": 0, 00:18:45.713 "enable_zerocopy_send_server": false, 00:18:45.713 "enable_zerocopy_send_client": false, 00:18:45.713 "zerocopy_threshold": 0, 00:18:45.713 "tls_version": 0, 00:18:45.713 "enable_ktls": false 00:18:45.713 } 00:18:45.713 } 00:18:45.713 ] 00:18:45.713 }, 00:18:45.713 { 00:18:45.713 "subsystem": "vmd", 00:18:45.713 "config": [] 00:18:45.713 }, 00:18:45.713 { 00:18:45.713 "subsystem": "accel", 00:18:45.713 "config": [ 00:18:45.713 { 00:18:45.713 "method": "accel_set_options", 00:18:45.713 "params": { 00:18:45.713 "small_cache_size": 128, 00:18:45.713 "large_cache_size": 16, 00:18:45.713 "task_count": 2048, 00:18:45.713 "sequence_count": 2048, 00:18:45.713 "buf_count": 2048 00:18:45.713 } 00:18:45.713 } 00:18:45.713 ] 00:18:45.713 }, 00:18:45.713 { 00:18:45.713 "subsystem": "bdev", 00:18:45.713 "config": [ 00:18:45.713 { 00:18:45.713 "method": "bdev_set_options", 00:18:45.713 "params": { 00:18:45.713 "bdev_io_pool_size": 65535, 00:18:45.713 
"bdev_io_cache_size": 256, 00:18:45.713 "bdev_auto_examine": true, 00:18:45.713 "iobuf_small_cache_size": 128, 00:18:45.713 "iobuf_large_cache_size": 16 00:18:45.713 } 00:18:45.713 }, 00:18:45.713 { 00:18:45.713 "method": "bdev_raid_set_options", 00:18:45.713 "params": { 00:18:45.713 "process_window_size_kb": 1024 00:18:45.713 } 00:18:45.713 }, 00:18:45.713 { 00:18:45.713 "method": "bdev_iscsi_set_options", 00:18:45.713 "params": { 00:18:45.713 "timeout_sec": 30 00:18:45.713 } 00:18:45.713 }, 00:18:45.713 { 00:18:45.713 "method": "bdev_nvme_set_options", 00:18:45.713 "params": { 00:18:45.713 "action_on_timeout": "none", 00:18:45.713 "timeout_us": 0, 00:18:45.713 "timeout_admin_us": 0, 00:18:45.713 "keep_alive_timeout_ms": 10000, 00:18:45.713 "arbitration_burst": 0, 00:18:45.713 "low_priority_weight": 0, 00:18:45.713 "medium_priority_weight": 0, 00:18:45.713 "high_priority_weight": 0, 00:18:45.713 "nvme_adminq_poll_period_us": 10000, 00:18:45.713 "nvme_ioq_poll_period_us": 0, 00:18:45.713 "io_queue_requests": 512, 00:18:45.713 "delay_cmd_submit": true, 00:18:45.713 "transport_retry_count": 4, 00:18:45.713 "bdev_retry_count": 3, 00:18:45.713 "transport_ack_timeout": 0, 00:18:45.713 "ctrlr_loss_timeout_sec": 0, 00:18:45.713 "reconnect_delay_sec": 0, 00:18:45.713 "fast_io_fail_timeout_sec": 0, 00:18:45.713 "disable_auto_failback": false, 00:18:45.713 "generate_uuids": false, 00:18:45.713 "transport_tos": 0, 00:18:45.713 "nvme_error_stat": false, 00:18:45.713 "rdma_srq_size": 0, 00:18:45.713 "io_path_stat": false, 00:18:45.713 "allow_accel_sequence": false, 00:18:45.713 "rdma_max_cq_size": 0, 00:18:45.713 "rdma_cm_event_timeout_ms": 0, 00:18:45.713 "dhchap_digests": [ 00:18:45.713 "sha256", 00:18:45.713 "sha384", 00:18:45.713 "sha512" 00:18:45.713 ], 00:18:45.713 "dhchap_dhgroups": [ 00:18:45.713 "null", 00:18:45.713 "ffdhe2048", 00:18:45.713 "ffdhe3072", 00:18:45.713 "ffdhe4096", 00:18:45.713 "ffdhe6144", 00:18:45.713 "ffdhe8192" 00:18:45.713 ] 00:18:45.713 } 00:18:45.713 }, 00:18:45.713 { 00:18:45.713 "method": "bdev_nvme_attach_controller", 00:18:45.713 "params": { 00:18:45.713 "name": "TLSTEST", 00:18:45.713 "trtype": "TCP", 00:18:45.713 "adrfam": "IPv4", 00:18:45.713 "traddr": "10.0.0.2", 00:18:45.713 "trsvcid": "4420", 00:18:45.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.713 "prchk_reftag": false, 00:18:45.713 "prchk_guard": false, 00:18:45.713 "ctrlr_loss_timeout_sec": 0, 00:18:45.713 "reconnect_delay_sec": 0, 00:18:45.713 "fast_io_fail_timeout_sec": 0, 00:18:45.713 "psk": "/tmp/tmp.9Vy62Z5vs2", 00:18:45.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:45.713 "hdgst": false, 00:18:45.713 "ddgst": false 00:18:45.714 } 00:18:45.714 }, 00:18:45.714 { 00:18:45.714 "method": "bdev_nvme_set_hotplug", 00:18:45.714 "params": { 00:18:45.714 "period_us": 100000, 00:18:45.714 "enable": false 00:18:45.714 } 00:18:45.714 }, 00:18:45.714 { 00:18:45.714 "method": "bdev_wait_for_examine" 00:18:45.714 } 00:18:45.714 ] 00:18:45.714 }, 00:18:45.714 { 00:18:45.714 "subsystem": "nbd", 00:18:45.714 "config": [] 00:18:45.714 } 00:18:45.714 ] 00:18:45.714 }' 00:18:45.714 [2024-07-12 12:30:14.788697] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:18:45.714 [2024-07-12 12:30:14.788819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85878 ] 00:18:45.971 [2024-07-12 12:30:14.933515] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.272 [2024-07-12 12:30:15.058891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.272 [2024-07-12 12:30:15.196777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:46.272 [2024-07-12 12:30:15.233666] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:46.272 [2024-07-12 12:30:15.234049] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:46.860 12:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:46.860 12:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:46.860 12:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:46.860 Running I/O for 10 seconds... 00:18:56.825 00:18:56.825 Latency(us) 00:18:56.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.825 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:56.825 Verification LBA range: start 0x0 length 0x2000 00:18:56.825 TLSTESTn1 : 10.02 4030.39 15.74 0.00 0.00 31699.04 6404.65 24069.59 00:18:56.825 =================================================================================================================== 00:18:56.825 Total : 4030.39 15.74 0.00 0.00 31699.04 6404.65 24069.59 00:18:56.825 0 00:18:56.825 12:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:56.825 12:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 85878 00:18:56.825 12:30:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85878 ']' 00:18:56.825 12:30:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85878 00:18:56.825 12:30:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:56.825 12:30:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:56.825 12:30:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85878 00:18:56.825 killing process with pid 85878 00:18:56.825 Received shutdown signal, test time was about 10.000000 seconds 00:18:56.825 00:18:56.825 Latency(us) 00:18:56.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.825 =================================================================================================================== 00:18:56.825 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:56.825 12:30:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:56.825 12:30:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:56.825 12:30:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85878' 00:18:56.825 12:30:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85878 00:18:56.825 [2024-07-12 12:30:25.905511] app.c:1023:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:56.825 12:30:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85878 00:18:57.084 12:30:26 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 85846 00:18:57.084 12:30:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85846 ']' 00:18:57.084 12:30:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85846 00:18:57.084 12:30:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:57.084 12:30:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:57.084 12:30:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85846 00:18:57.084 killing process with pid 85846 00:18:57.084 12:30:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:57.084 12:30:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:57.084 12:30:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85846' 00:18:57.084 12:30:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85846 00:18:57.084 [2024-07-12 12:30:26.138636] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:57.084 12:30:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85846 00:18:57.343 12:30:26 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:57.343 12:30:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:57.343 12:30:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:57.343 12:30:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.343 12:30:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=86011 00:18:57.343 12:30:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:57.343 12:30:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 86011 00:18:57.343 12:30:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86011 ']' 00:18:57.343 12:30:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.343 12:30:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:57.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.343 12:30:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.343 12:30:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:57.343 12:30:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.343 [2024-07-12 12:30:26.411013] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:57.343 [2024-07-12 12:30:26.411101] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.607 [2024-07-12 12:30:26.542623] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.607 [2024-07-12 12:30:26.638314] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:57.607 [2024-07-12 12:30:26.638377] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.607 [2024-07-12 12:30:26.638390] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.607 [2024-07-12 12:30:26.638398] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.607 [2024-07-12 12:30:26.638406] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:57.607 [2024-07-12 12:30:26.638436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.892 [2024-07-12 12:30:26.690990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:58.459 12:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:58.459 12:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:58.459 12:30:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:58.459 12:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:58.459 12:30:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.459 12:30:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.459 12:30:27 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.9Vy62Z5vs2 00:18:58.459 12:30:27 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9Vy62Z5vs2 00:18:58.459 12:30:27 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:58.717 [2024-07-12 12:30:27.680944] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.717 12:30:27 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:58.974 12:30:27 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:59.232 [2024-07-12 12:30:28.121001] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:59.232 [2024-07-12 12:30:28.121234] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.232 12:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:59.490 malloc0 00:18:59.490 12:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:59.748 12:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9Vy62Z5vs2 00:19:00.007 [2024-07-12 12:30:28.836240] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:00.007 12:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=86066 00:19:00.007 12:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:00.007 12:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 
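Stripped of the xtrace noise, the setup_nvmf_tgt sequence traced above amounts to the following RPCs against the default /var/tmp/spdk.sock. The commands are copied from the trace; the -k flag on the listener (secure_channel in the saved config) and the --psk file path on add_host are the deprecated PSK-path form that triggers the warnings seen here.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o                          # create the TCP transport (flags as used by the test)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0                    # 32 MiB malloc bdev = 8192 x 4096-byte blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.9Vy62Z5vs2                                 # deprecated: PSK supplied as a file path

The host side follows immediately in the trace: bdevperf loads the same key via keyring_file_add_key and attaches with --psk key0.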
00:19:00.007 12:30:28 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 86066 /var/tmp/bdevperf.sock 00:19:00.007 12:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86066 ']' 00:19:00.007 12:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:00.007 12:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:00.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:00.007 12:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:00.007 12:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:00.007 12:30:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.007 [2024-07-12 12:30:28.906820] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:19:00.007 [2024-07-12 12:30:28.906916] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86066 ] 00:19:00.007 [2024-07-12 12:30:29.041087] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.266 [2024-07-12 12:30:29.127758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.266 [2024-07-12 12:30:29.180450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:00.871 12:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:00.871 12:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:00.871 12:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9Vy62Z5vs2 00:19:01.129 12:30:30 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:01.388 [2024-07-12 12:30:30.360767] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:01.388 nvme0n1 00:19:01.388 12:30:30 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:01.647 Running I/O for 1 seconds... 
00:19:02.582 00:19:02.582 Latency(us) 00:19:02.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.583 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:02.583 Verification LBA range: start 0x0 length 0x2000 00:19:02.583 nvme0n1 : 1.02 4030.56 15.74 0.00 0.00 31425.69 6762.12 27644.28 00:19:02.583 =================================================================================================================== 00:19:02.583 Total : 4030.56 15.74 0.00 0.00 31425.69 6762.12 27644.28 00:19:02.583 0 00:19:02.583 12:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 86066 00:19:02.583 12:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 86066 ']' 00:19:02.583 12:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86066 00:19:02.583 12:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:02.583 12:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:02.583 12:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86066 00:19:02.583 killing process with pid 86066 00:19:02.583 Received shutdown signal, test time was about 1.000000 seconds 00:19:02.583 00:19:02.583 Latency(us) 00:19:02.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.583 =================================================================================================================== 00:19:02.583 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:02.583 12:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:02.583 12:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:02.583 12:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86066' 00:19:02.583 12:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86066 00:19:02.583 12:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86066 00:19:02.841 12:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 86011 00:19:02.841 12:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 86011 ']' 00:19:02.841 12:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86011 00:19:02.841 12:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:02.841 12:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:02.841 12:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86011 00:19:02.841 killing process with pid 86011 00:19:02.841 12:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:02.841 12:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:02.841 12:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86011' 00:19:02.841 12:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86011 00:19:02.841 [2024-07-12 12:30:31.835229] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:02.841 12:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86011 00:19:03.100 12:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:19:03.100 12:30:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:03.100 12:30:32 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:19:03.100 12:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.100 12:30:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=86117 00:19:03.100 12:30:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 86117 00:19:03.100 12:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86117 ']' 00:19:03.100 12:30:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:03.100 12:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.100 12:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:03.100 12:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.100 12:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:03.100 12:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.100 [2024-07-12 12:30:32.110303] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:19:03.100 [2024-07-12 12:30:32.110388] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.358 [2024-07-12 12:30:32.244833] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.358 [2024-07-12 12:30:32.338698] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.358 [2024-07-12 12:30:32.338784] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:03.358 [2024-07-12 12:30:32.338796] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.358 [2024-07-12 12:30:32.338820] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.358 [2024-07-12 12:30:32.338829] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
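waitforlisten, used here for the target's /var/tmp/spdk.sock and earlier for each bdevperf instance's /var/tmp/bdevperf.sock, simply polls the RPC socket until it answers. A rough sketch of the pattern, assuming rpc_get_methods as the readiness probe; the real helper in test/common/autotest_common.sh carries extra bookkeeping beyond this:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for (( i = 100; i != 0; i-- )); do
          kill -0 "$pid" 2>/dev/null || return 1                # give up if the process died
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
              rpc_get_methods &>/dev/null; then
              return 0                                          # socket is up and answering RPCs
          fi
          sleep 0.5
      done
      return 1                                                  # timed out after max_retries attempts
  }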
00:19:03.358 [2024-07-12 12:30:32.338861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.358 [2024-07-12 12:30:32.392386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:04.292 12:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:04.292 12:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:04.292 12:30:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:04.292 12:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:04.292 12:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.292 12:30:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.292 12:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:19:04.292 12:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.292 12:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.292 [2024-07-12 12:30:33.094406] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.292 malloc0 00:19:04.292 [2024-07-12 12:30:33.125408] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:04.292 [2024-07-12 12:30:33.125609] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.292 12:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.292 12:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=86149 00:19:04.292 12:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 86149 /var/tmp/bdevperf.sock 00:19:04.292 12:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:04.292 12:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86149 ']' 00:19:04.292 12:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.292 12:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:04.292 12:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.292 12:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:04.292 12:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.292 [2024-07-12 12:30:33.197846] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:19:04.292 [2024-07-12 12:30:33.197928] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86149 ] 00:19:04.292 [2024-07-12 12:30:33.330747] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.551 [2024-07-12 12:30:33.420508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.551 [2024-07-12 12:30:33.473333] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:05.483 12:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:05.483 12:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:05.483 12:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9Vy62Z5vs2 00:19:05.484 12:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:05.741 [2024-07-12 12:30:34.749012] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:05.741 nvme0n1 00:19:05.998 12:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:05.998 Running I/O for 1 seconds... 00:19:06.930 00:19:06.930 Latency(us) 00:19:06.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.930 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:06.930 Verification LBA range: start 0x0 length 0x2000 00:19:06.930 nvme0n1 : 1.02 4106.30 16.04 0.00 0.00 30849.49 6494.02 26452.71 00:19:06.930 =================================================================================================================== 00:19:06.930 Total : 4106.30 16.04 0.00 0.00 30849.49 6494.02 26452.71 00:19:06.930 0 00:19:06.930 12:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:19:06.930 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.930 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.188 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.188 12:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:19:07.188 "subsystems": [ 00:19:07.188 { 00:19:07.188 "subsystem": "keyring", 00:19:07.188 "config": [ 00:19:07.188 { 00:19:07.188 "method": "keyring_file_add_key", 00:19:07.188 "params": { 00:19:07.188 "name": "key0", 00:19:07.188 "path": "/tmp/tmp.9Vy62Z5vs2" 00:19:07.188 } 00:19:07.188 } 00:19:07.188 ] 00:19:07.188 }, 00:19:07.188 { 00:19:07.188 "subsystem": "iobuf", 00:19:07.188 "config": [ 00:19:07.188 { 00:19:07.188 "method": "iobuf_set_options", 00:19:07.188 "params": { 00:19:07.188 "small_pool_count": 8192, 00:19:07.188 "large_pool_count": 1024, 00:19:07.188 "small_bufsize": 8192, 00:19:07.188 "large_bufsize": 135168 00:19:07.188 } 00:19:07.188 } 00:19:07.188 ] 00:19:07.188 }, 00:19:07.188 { 00:19:07.188 "subsystem": "sock", 00:19:07.188 "config": [ 00:19:07.188 { 00:19:07.188 "method": "sock_set_default_impl", 00:19:07.188 "params": { 00:19:07.188 "impl_name": "uring" 
00:19:07.188 } 00:19:07.188 }, 00:19:07.188 { 00:19:07.188 "method": "sock_impl_set_options", 00:19:07.188 "params": { 00:19:07.188 "impl_name": "ssl", 00:19:07.188 "recv_buf_size": 4096, 00:19:07.188 "send_buf_size": 4096, 00:19:07.188 "enable_recv_pipe": true, 00:19:07.188 "enable_quickack": false, 00:19:07.188 "enable_placement_id": 0, 00:19:07.188 "enable_zerocopy_send_server": true, 00:19:07.188 "enable_zerocopy_send_client": false, 00:19:07.188 "zerocopy_threshold": 0, 00:19:07.188 "tls_version": 0, 00:19:07.188 "enable_ktls": false 00:19:07.188 } 00:19:07.188 }, 00:19:07.188 { 00:19:07.188 "method": "sock_impl_set_options", 00:19:07.188 "params": { 00:19:07.188 "impl_name": "posix", 00:19:07.188 "recv_buf_size": 2097152, 00:19:07.188 "send_buf_size": 2097152, 00:19:07.188 "enable_recv_pipe": true, 00:19:07.188 "enable_quickack": false, 00:19:07.188 "enable_placement_id": 0, 00:19:07.188 "enable_zerocopy_send_server": true, 00:19:07.188 "enable_zerocopy_send_client": false, 00:19:07.188 "zerocopy_threshold": 0, 00:19:07.188 "tls_version": 0, 00:19:07.188 "enable_ktls": false 00:19:07.188 } 00:19:07.188 }, 00:19:07.188 { 00:19:07.188 "method": "sock_impl_set_options", 00:19:07.188 "params": { 00:19:07.188 "impl_name": "uring", 00:19:07.188 "recv_buf_size": 2097152, 00:19:07.188 "send_buf_size": 2097152, 00:19:07.188 "enable_recv_pipe": true, 00:19:07.188 "enable_quickack": false, 00:19:07.188 "enable_placement_id": 0, 00:19:07.188 "enable_zerocopy_send_server": false, 00:19:07.188 "enable_zerocopy_send_client": false, 00:19:07.188 "zerocopy_threshold": 0, 00:19:07.188 "tls_version": 0, 00:19:07.188 "enable_ktls": false 00:19:07.188 } 00:19:07.188 } 00:19:07.188 ] 00:19:07.188 }, 00:19:07.188 { 00:19:07.188 "subsystem": "vmd", 00:19:07.188 "config": [] 00:19:07.188 }, 00:19:07.188 { 00:19:07.188 "subsystem": "accel", 00:19:07.188 "config": [ 00:19:07.188 { 00:19:07.188 "method": "accel_set_options", 00:19:07.188 "params": { 00:19:07.188 "small_cache_size": 128, 00:19:07.188 "large_cache_size": 16, 00:19:07.188 "task_count": 2048, 00:19:07.188 "sequence_count": 2048, 00:19:07.188 "buf_count": 2048 00:19:07.188 } 00:19:07.188 } 00:19:07.188 ] 00:19:07.188 }, 00:19:07.188 { 00:19:07.188 "subsystem": "bdev", 00:19:07.188 "config": [ 00:19:07.188 { 00:19:07.188 "method": "bdev_set_options", 00:19:07.188 "params": { 00:19:07.188 "bdev_io_pool_size": 65535, 00:19:07.188 "bdev_io_cache_size": 256, 00:19:07.188 "bdev_auto_examine": true, 00:19:07.188 "iobuf_small_cache_size": 128, 00:19:07.188 "iobuf_large_cache_size": 16 00:19:07.188 } 00:19:07.188 }, 00:19:07.188 { 00:19:07.188 "method": "bdev_raid_set_options", 00:19:07.188 "params": { 00:19:07.188 "process_window_size_kb": 1024 00:19:07.188 } 00:19:07.188 }, 00:19:07.188 { 00:19:07.188 "method": "bdev_iscsi_set_options", 00:19:07.188 "params": { 00:19:07.188 "timeout_sec": 30 00:19:07.188 } 00:19:07.188 }, 00:19:07.188 { 00:19:07.188 "method": "bdev_nvme_set_options", 00:19:07.188 "params": { 00:19:07.188 "action_on_timeout": "none", 00:19:07.188 "timeout_us": 0, 00:19:07.188 "timeout_admin_us": 0, 00:19:07.188 "keep_alive_timeout_ms": 10000, 00:19:07.188 "arbitration_burst": 0, 00:19:07.188 "low_priority_weight": 0, 00:19:07.188 "medium_priority_weight": 0, 00:19:07.188 "high_priority_weight": 0, 00:19:07.188 "nvme_adminq_poll_period_us": 10000, 00:19:07.188 "nvme_ioq_poll_period_us": 0, 00:19:07.188 "io_queue_requests": 0, 00:19:07.188 "delay_cmd_submit": true, 00:19:07.188 "transport_retry_count": 4, 00:19:07.188 "bdev_retry_count": 3, 
00:19:07.188 "transport_ack_timeout": 0, 00:19:07.188 "ctrlr_loss_timeout_sec": 0, 00:19:07.188 "reconnect_delay_sec": 0, 00:19:07.188 "fast_io_fail_timeout_sec": 0, 00:19:07.188 "disable_auto_failback": false, 00:19:07.189 "generate_uuids": false, 00:19:07.189 "transport_tos": 0, 00:19:07.189 "nvme_error_stat": false, 00:19:07.189 "rdma_srq_size": 0, 00:19:07.189 "io_path_stat": false, 00:19:07.189 "allow_accel_sequence": false, 00:19:07.189 "rdma_max_cq_size": 0, 00:19:07.189 "rdma_cm_event_timeout_ms": 0, 00:19:07.189 "dhchap_digests": [ 00:19:07.189 "sha256", 00:19:07.189 "sha384", 00:19:07.189 "sha512" 00:19:07.189 ], 00:19:07.189 "dhchap_dhgroups": [ 00:19:07.189 "null", 00:19:07.189 "ffdhe2048", 00:19:07.189 "ffdhe3072", 00:19:07.189 "ffdhe4096", 00:19:07.189 "ffdhe6144", 00:19:07.189 "ffdhe8192" 00:19:07.189 ] 00:19:07.189 } 00:19:07.189 }, 00:19:07.189 { 00:19:07.189 "method": "bdev_nvme_set_hotplug", 00:19:07.189 "params": { 00:19:07.189 "period_us": 100000, 00:19:07.189 "enable": false 00:19:07.189 } 00:19:07.189 }, 00:19:07.189 { 00:19:07.189 "method": "bdev_malloc_create", 00:19:07.189 "params": { 00:19:07.189 "name": "malloc0", 00:19:07.189 "num_blocks": 8192, 00:19:07.189 "block_size": 4096, 00:19:07.189 "physical_block_size": 4096, 00:19:07.189 "uuid": "bb3d1104-9d86-4ff9-b830-a468b14c6811", 00:19:07.189 "optimal_io_boundary": 0 00:19:07.189 } 00:19:07.189 }, 00:19:07.189 { 00:19:07.189 "method": "bdev_wait_for_examine" 00:19:07.189 } 00:19:07.189 ] 00:19:07.189 }, 00:19:07.189 { 00:19:07.189 "subsystem": "nbd", 00:19:07.189 "config": [] 00:19:07.189 }, 00:19:07.189 { 00:19:07.189 "subsystem": "scheduler", 00:19:07.189 "config": [ 00:19:07.189 { 00:19:07.189 "method": "framework_set_scheduler", 00:19:07.189 "params": { 00:19:07.189 "name": "static" 00:19:07.189 } 00:19:07.189 } 00:19:07.189 ] 00:19:07.189 }, 00:19:07.189 { 00:19:07.189 "subsystem": "nvmf", 00:19:07.189 "config": [ 00:19:07.189 { 00:19:07.189 "method": "nvmf_set_config", 00:19:07.189 "params": { 00:19:07.189 "discovery_filter": "match_any", 00:19:07.189 "admin_cmd_passthru": { 00:19:07.189 "identify_ctrlr": false 00:19:07.189 } 00:19:07.189 } 00:19:07.189 }, 00:19:07.189 { 00:19:07.189 "method": "nvmf_set_max_subsystems", 00:19:07.189 "params": { 00:19:07.189 "max_subsystems": 1024 00:19:07.189 } 00:19:07.189 }, 00:19:07.189 { 00:19:07.189 "method": "nvmf_set_crdt", 00:19:07.189 "params": { 00:19:07.189 "crdt1": 0, 00:19:07.189 "crdt2": 0, 00:19:07.189 "crdt3": 0 00:19:07.189 } 00:19:07.189 }, 00:19:07.189 { 00:19:07.189 "method": "nvmf_create_transport", 00:19:07.189 "params": { 00:19:07.189 "trtype": "TCP", 00:19:07.189 "max_queue_depth": 128, 00:19:07.189 "max_io_qpairs_per_ctrlr": 127, 00:19:07.189 "in_capsule_data_size": 4096, 00:19:07.189 "max_io_size": 131072, 00:19:07.189 "io_unit_size": 131072, 00:19:07.189 "max_aq_depth": 128, 00:19:07.189 "num_shared_buffers": 511, 00:19:07.189 "buf_cache_size": 4294967295, 00:19:07.189 "dif_insert_or_strip": false, 00:19:07.189 "zcopy": false, 00:19:07.189 "c2h_success": false, 00:19:07.189 "sock_priority": 0, 00:19:07.189 "abort_timeout_sec": 1, 00:19:07.189 "ack_timeout": 0, 00:19:07.189 "data_wr_pool_size": 0 00:19:07.189 } 00:19:07.189 }, 00:19:07.189 { 00:19:07.189 "method": "nvmf_create_subsystem", 00:19:07.189 "params": { 00:19:07.189 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.189 "allow_any_host": false, 00:19:07.189 "serial_number": "00000000000000000000", 00:19:07.189 "model_number": "SPDK bdev Controller", 00:19:07.189 "max_namespaces": 32, 
00:19:07.189 "min_cntlid": 1, 00:19:07.189 "max_cntlid": 65519, 00:19:07.189 "ana_reporting": false 00:19:07.189 } 00:19:07.189 }, 00:19:07.189 { 00:19:07.189 "method": "nvmf_subsystem_add_host", 00:19:07.189 "params": { 00:19:07.189 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.189 "host": "nqn.2016-06.io.spdk:host1", 00:19:07.189 "psk": "key0" 00:19:07.189 } 00:19:07.189 }, 00:19:07.189 { 00:19:07.189 "method": "nvmf_subsystem_add_ns", 00:19:07.189 "params": { 00:19:07.189 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.189 "namespace": { 00:19:07.189 "nsid": 1, 00:19:07.189 "bdev_name": "malloc0", 00:19:07.189 "nguid": "BB3D11049D864FF9B830A468B14C6811", 00:19:07.189 "uuid": "bb3d1104-9d86-4ff9-b830-a468b14c6811", 00:19:07.189 "no_auto_visible": false 00:19:07.189 } 00:19:07.189 } 00:19:07.189 }, 00:19:07.189 { 00:19:07.189 "method": "nvmf_subsystem_add_listener", 00:19:07.189 "params": { 00:19:07.189 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.189 "listen_address": { 00:19:07.189 "trtype": "TCP", 00:19:07.189 "adrfam": "IPv4", 00:19:07.189 "traddr": "10.0.0.2", 00:19:07.189 "trsvcid": "4420" 00:19:07.189 }, 00:19:07.189 "secure_channel": true 00:19:07.189 } 00:19:07.189 } 00:19:07.189 ] 00:19:07.189 } 00:19:07.189 ] 00:19:07.189 }' 00:19:07.189 12:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:07.448 12:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:19:07.448 "subsystems": [ 00:19:07.448 { 00:19:07.448 "subsystem": "keyring", 00:19:07.448 "config": [ 00:19:07.448 { 00:19:07.448 "method": "keyring_file_add_key", 00:19:07.448 "params": { 00:19:07.448 "name": "key0", 00:19:07.448 "path": "/tmp/tmp.9Vy62Z5vs2" 00:19:07.448 } 00:19:07.448 } 00:19:07.448 ] 00:19:07.448 }, 00:19:07.448 { 00:19:07.448 "subsystem": "iobuf", 00:19:07.448 "config": [ 00:19:07.448 { 00:19:07.448 "method": "iobuf_set_options", 00:19:07.448 "params": { 00:19:07.448 "small_pool_count": 8192, 00:19:07.448 "large_pool_count": 1024, 00:19:07.448 "small_bufsize": 8192, 00:19:07.448 "large_bufsize": 135168 00:19:07.448 } 00:19:07.448 } 00:19:07.448 ] 00:19:07.448 }, 00:19:07.448 { 00:19:07.448 "subsystem": "sock", 00:19:07.448 "config": [ 00:19:07.448 { 00:19:07.448 "method": "sock_set_default_impl", 00:19:07.448 "params": { 00:19:07.448 "impl_name": "uring" 00:19:07.448 } 00:19:07.448 }, 00:19:07.448 { 00:19:07.448 "method": "sock_impl_set_options", 00:19:07.448 "params": { 00:19:07.448 "impl_name": "ssl", 00:19:07.448 "recv_buf_size": 4096, 00:19:07.448 "send_buf_size": 4096, 00:19:07.448 "enable_recv_pipe": true, 00:19:07.448 "enable_quickack": false, 00:19:07.448 "enable_placement_id": 0, 00:19:07.448 "enable_zerocopy_send_server": true, 00:19:07.448 "enable_zerocopy_send_client": false, 00:19:07.448 "zerocopy_threshold": 0, 00:19:07.448 "tls_version": 0, 00:19:07.448 "enable_ktls": false 00:19:07.448 } 00:19:07.448 }, 00:19:07.448 { 00:19:07.448 "method": "sock_impl_set_options", 00:19:07.448 "params": { 00:19:07.448 "impl_name": "posix", 00:19:07.448 "recv_buf_size": 2097152, 00:19:07.448 "send_buf_size": 2097152, 00:19:07.448 "enable_recv_pipe": true, 00:19:07.448 "enable_quickack": false, 00:19:07.448 "enable_placement_id": 0, 00:19:07.448 "enable_zerocopy_send_server": true, 00:19:07.448 "enable_zerocopy_send_client": false, 00:19:07.448 "zerocopy_threshold": 0, 00:19:07.448 "tls_version": 0, 00:19:07.448 "enable_ktls": false 00:19:07.448 } 00:19:07.448 }, 00:19:07.448 { 00:19:07.448 "method": 
"sock_impl_set_options", 00:19:07.448 "params": { 00:19:07.448 "impl_name": "uring", 00:19:07.448 "recv_buf_size": 2097152, 00:19:07.448 "send_buf_size": 2097152, 00:19:07.448 "enable_recv_pipe": true, 00:19:07.448 "enable_quickack": false, 00:19:07.448 "enable_placement_id": 0, 00:19:07.448 "enable_zerocopy_send_server": false, 00:19:07.448 "enable_zerocopy_send_client": false, 00:19:07.448 "zerocopy_threshold": 0, 00:19:07.448 "tls_version": 0, 00:19:07.448 "enable_ktls": false 00:19:07.448 } 00:19:07.448 } 00:19:07.448 ] 00:19:07.448 }, 00:19:07.448 { 00:19:07.448 "subsystem": "vmd", 00:19:07.448 "config": [] 00:19:07.448 }, 00:19:07.448 { 00:19:07.448 "subsystem": "accel", 00:19:07.448 "config": [ 00:19:07.448 { 00:19:07.448 "method": "accel_set_options", 00:19:07.448 "params": { 00:19:07.448 "small_cache_size": 128, 00:19:07.448 "large_cache_size": 16, 00:19:07.448 "task_count": 2048, 00:19:07.448 "sequence_count": 2048, 00:19:07.448 "buf_count": 2048 00:19:07.448 } 00:19:07.448 } 00:19:07.448 ] 00:19:07.448 }, 00:19:07.448 { 00:19:07.448 "subsystem": "bdev", 00:19:07.448 "config": [ 00:19:07.448 { 00:19:07.448 "method": "bdev_set_options", 00:19:07.448 "params": { 00:19:07.448 "bdev_io_pool_size": 65535, 00:19:07.448 "bdev_io_cache_size": 256, 00:19:07.448 "bdev_auto_examine": true, 00:19:07.448 "iobuf_small_cache_size": 128, 00:19:07.448 "iobuf_large_cache_size": 16 00:19:07.448 } 00:19:07.448 }, 00:19:07.448 { 00:19:07.448 "method": "bdev_raid_set_options", 00:19:07.449 "params": { 00:19:07.449 "process_window_size_kb": 1024 00:19:07.449 } 00:19:07.449 }, 00:19:07.449 { 00:19:07.449 "method": "bdev_iscsi_set_options", 00:19:07.449 "params": { 00:19:07.449 "timeout_sec": 30 00:19:07.449 } 00:19:07.449 }, 00:19:07.449 { 00:19:07.449 "method": "bdev_nvme_set_options", 00:19:07.449 "params": { 00:19:07.449 "action_on_timeout": "none", 00:19:07.449 "timeout_us": 0, 00:19:07.449 "timeout_admin_us": 0, 00:19:07.449 "keep_alive_timeout_ms": 10000, 00:19:07.449 "arbitration_burst": 0, 00:19:07.449 "low_priority_weight": 0, 00:19:07.449 "medium_priority_weight": 0, 00:19:07.449 "high_priority_weight": 0, 00:19:07.449 "nvme_adminq_poll_period_us": 10000, 00:19:07.449 "nvme_ioq_poll_period_us": 0, 00:19:07.449 "io_queue_requests": 512, 00:19:07.449 "delay_cmd_submit": true, 00:19:07.449 "transport_retry_count": 4, 00:19:07.449 "bdev_retry_count": 3, 00:19:07.449 "transport_ack_timeout": 0, 00:19:07.449 "ctrlr_loss_timeout_sec": 0, 00:19:07.449 "reconnect_delay_sec": 0, 00:19:07.449 "fast_io_fail_timeout_sec": 0, 00:19:07.449 "disable_auto_failback": false, 00:19:07.449 "generate_uuids": false, 00:19:07.449 "transport_tos": 0, 00:19:07.449 "nvme_error_stat": false, 00:19:07.449 "rdma_srq_size": 0, 00:19:07.449 "io_path_stat": false, 00:19:07.449 "allow_accel_sequence": false, 00:19:07.449 "rdma_max_cq_size": 0, 00:19:07.449 "rdma_cm_event_timeout_ms": 0, 00:19:07.449 "dhchap_digests": [ 00:19:07.449 "sha256", 00:19:07.449 "sha384", 00:19:07.449 "sha512" 00:19:07.449 ], 00:19:07.449 "dhchap_dhgroups": [ 00:19:07.449 "null", 00:19:07.449 "ffdhe2048", 00:19:07.449 "ffdhe3072", 00:19:07.449 "ffdhe4096", 00:19:07.449 "ffdhe6144", 00:19:07.449 "ffdhe8192" 00:19:07.449 ] 00:19:07.449 } 00:19:07.449 }, 00:19:07.449 { 00:19:07.449 "method": "bdev_nvme_attach_controller", 00:19:07.449 "params": { 00:19:07.449 "name": "nvme0", 00:19:07.449 "trtype": "TCP", 00:19:07.449 "adrfam": "IPv4", 00:19:07.449 "traddr": "10.0.0.2", 00:19:07.449 "trsvcid": "4420", 00:19:07.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 
00:19:07.449 "prchk_reftag": false, 00:19:07.449 "prchk_guard": false, 00:19:07.449 "ctrlr_loss_timeout_sec": 0, 00:19:07.449 "reconnect_delay_sec": 0, 00:19:07.449 "fast_io_fail_timeout_sec": 0, 00:19:07.449 "psk": "key0", 00:19:07.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:07.449 "hdgst": false, 00:19:07.449 "ddgst": false 00:19:07.449 } 00:19:07.449 }, 00:19:07.449 { 00:19:07.449 "method": "bdev_nvme_set_hotplug", 00:19:07.449 "params": { 00:19:07.449 "period_us": 100000, 00:19:07.449 "enable": false 00:19:07.449 } 00:19:07.449 }, 00:19:07.449 { 00:19:07.449 "method": "bdev_enable_histogram", 00:19:07.449 "params": { 00:19:07.449 "name": "nvme0n1", 00:19:07.449 "enable": true 00:19:07.449 } 00:19:07.449 }, 00:19:07.449 { 00:19:07.449 "method": "bdev_wait_for_examine" 00:19:07.449 } 00:19:07.449 ] 00:19:07.449 }, 00:19:07.449 { 00:19:07.449 "subsystem": "nbd", 00:19:07.449 "config": [] 00:19:07.449 } 00:19:07.449 ] 00:19:07.449 }' 00:19:07.449 12:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 86149 00:19:07.449 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 86149 ']' 00:19:07.449 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86149 00:19:07.449 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:07.449 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:07.449 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86149 00:19:07.449 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:07.449 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:07.449 killing process with pid 86149 00:19:07.449 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86149' 00:19:07.449 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86149 00:19:07.449 Received shutdown signal, test time was about 1.000000 seconds 00:19:07.449 00:19:07.449 Latency(us) 00:19:07.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.449 =================================================================================================================== 00:19:07.449 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:07.449 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86149 00:19:07.708 12:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 86117 00:19:07.708 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 86117 ']' 00:19:07.708 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86117 00:19:07.708 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:07.708 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:07.708 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86117 00:19:07.708 killing process with pid 86117 00:19:07.708 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:07.708 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:07.708 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86117' 00:19:07.708 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86117 00:19:07.708 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86117 
00:19:07.967 12:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:19:07.967 12:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:07.967 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:07.967 12:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:19:07.967 "subsystems": [ 00:19:07.967 { 00:19:07.967 "subsystem": "keyring", 00:19:07.967 "config": [ 00:19:07.967 { 00:19:07.967 "method": "keyring_file_add_key", 00:19:07.967 "params": { 00:19:07.967 "name": "key0", 00:19:07.967 "path": "/tmp/tmp.9Vy62Z5vs2" 00:19:07.967 } 00:19:07.967 } 00:19:07.967 ] 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "subsystem": "iobuf", 00:19:07.967 "config": [ 00:19:07.967 { 00:19:07.967 "method": "iobuf_set_options", 00:19:07.967 "params": { 00:19:07.967 "small_pool_count": 8192, 00:19:07.967 "large_pool_count": 1024, 00:19:07.967 "small_bufsize": 8192, 00:19:07.967 "large_bufsize": 135168 00:19:07.967 } 00:19:07.967 } 00:19:07.967 ] 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "subsystem": "sock", 00:19:07.967 "config": [ 00:19:07.967 { 00:19:07.967 "method": "sock_set_default_impl", 00:19:07.967 "params": { 00:19:07.967 "impl_name": "uring" 00:19:07.967 } 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "method": "sock_impl_set_options", 00:19:07.967 "params": { 00:19:07.967 "impl_name": "ssl", 00:19:07.967 "recv_buf_size": 4096, 00:19:07.967 "send_buf_size": 4096, 00:19:07.967 "enable_recv_pipe": true, 00:19:07.967 "enable_quickack": false, 00:19:07.967 "enable_placement_id": 0, 00:19:07.967 "enable_zerocopy_send_server": true, 00:19:07.967 "enable_zerocopy_send_client": false, 00:19:07.967 "zerocopy_threshold": 0, 00:19:07.967 "tls_version": 0, 00:19:07.967 "enable_ktls": false 00:19:07.967 } 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "method": "sock_impl_set_options", 00:19:07.967 "params": { 00:19:07.967 "impl_name": "posix", 00:19:07.967 "recv_buf_size": 2097152, 00:19:07.967 "send_buf_size": 2097152, 00:19:07.967 "enable_recv_pipe": true, 00:19:07.967 "enable_quickack": false, 00:19:07.967 "enable_placement_id": 0, 00:19:07.967 "enable_zerocopy_send_server": true, 00:19:07.967 "enable_zerocopy_send_client": false, 00:19:07.967 "zerocopy_threshold": 0, 00:19:07.967 "tls_version": 0, 00:19:07.967 "enable_ktls": false 00:19:07.967 } 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "method": "sock_impl_set_options", 00:19:07.967 "params": { 00:19:07.967 "impl_name": "uring", 00:19:07.967 "recv_buf_size": 2097152, 00:19:07.967 "send_buf_size": 2097152, 00:19:07.967 "enable_recv_pipe": true, 00:19:07.967 "enable_quickack": false, 00:19:07.967 "enable_placement_id": 0, 00:19:07.967 "enable_zerocopy_send_server": false, 00:19:07.967 "enable_zerocopy_send_client": false, 00:19:07.967 "zerocopy_threshold": 0, 00:19:07.967 "tls_version": 0, 00:19:07.967 "enable_ktls": false 00:19:07.967 } 00:19:07.967 } 00:19:07.967 ] 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "subsystem": "vmd", 00:19:07.967 "config": [] 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "subsystem": "accel", 00:19:07.967 "config": [ 00:19:07.967 { 00:19:07.967 "method": "accel_set_options", 00:19:07.967 "params": { 00:19:07.967 "small_cache_size": 128, 00:19:07.967 "large_cache_size": 16, 00:19:07.967 "task_count": 2048, 00:19:07.967 "sequence_count": 2048, 00:19:07.967 "buf_count": 2048 00:19:07.967 } 00:19:07.967 } 00:19:07.967 ] 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "subsystem": "bdev", 00:19:07.967 "config": [ 00:19:07.967 { 00:19:07.967 
"method": "bdev_set_options", 00:19:07.967 "params": { 00:19:07.967 "bdev_io_pool_size": 65535, 00:19:07.967 "bdev_io_cache_size": 256, 00:19:07.967 "bdev_auto_examine": true, 00:19:07.967 "iobuf_small_cache_size": 128, 00:19:07.967 "iobuf_large_cache_size": 16 00:19:07.967 } 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "method": "bdev_raid_set_options", 00:19:07.967 "params": { 00:19:07.967 "process_window_size_kb": 1024 00:19:07.967 } 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "method": "bdev_iscsi_set_options", 00:19:07.967 "params": { 00:19:07.967 "timeout_sec": 30 00:19:07.967 } 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "method": "bdev_nvme_set_options", 00:19:07.967 "params": { 00:19:07.967 "action_on_timeout": "none", 00:19:07.967 "timeout_us": 0, 00:19:07.967 "timeout_admin_us": 0, 00:19:07.967 "keep_alive_timeout_ms": 10000, 00:19:07.967 "arbitration_burst": 0, 00:19:07.967 "low_priority_weight": 0, 00:19:07.967 "medium_priority_weight": 0, 00:19:07.967 "high_priority_weight": 0, 00:19:07.967 "nvme_adminq_poll_period_us": 10000, 00:19:07.967 "nvme_ioq_poll_period_us": 0, 00:19:07.967 "io_queue_requests": 0, 00:19:07.967 "delay_cmd_submit": true, 00:19:07.967 "transport_retry_count": 4, 00:19:07.967 "bdev_retry_count": 3, 00:19:07.967 "transport_ack_timeout": 0, 00:19:07.967 "ctrlr_loss_timeout_sec": 0, 00:19:07.967 "reconnect_delay_sec": 0, 00:19:07.967 "fast_io_fail_timeout_sec": 0, 00:19:07.967 "disable_auto_failback": false, 00:19:07.967 "generate_uuids": false, 00:19:07.967 "transport_tos": 0, 00:19:07.967 "nvme_error_stat": false, 00:19:07.967 "rdma_srq_size": 0, 00:19:07.967 "io_path_stat": false, 00:19:07.967 "allow_accel_sequence": false, 00:19:07.967 "rdma_max_cq_size": 0, 00:19:07.967 "rdma_cm_event_timeout_ms": 0, 00:19:07.967 "dhchap_digests": [ 00:19:07.967 "sha256", 00:19:07.967 "sha384", 00:19:07.967 "sha512" 00:19:07.967 ], 00:19:07.967 "dhchap_dhgroups": [ 00:19:07.967 "null", 00:19:07.967 "ffdhe2048", 00:19:07.967 "ffdhe3072", 00:19:07.967 "ffdhe4096", 00:19:07.967 "ffdhe6144", 00:19:07.967 "ffdhe8192" 00:19:07.967 ] 00:19:07.967 } 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "method": "bdev_nvme_set_hotplug", 00:19:07.967 "params": { 00:19:07.967 "period_us": 100000, 00:19:07.967 "enable": false 00:19:07.967 } 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "method": "bdev_malloc_create", 00:19:07.967 "params": { 00:19:07.967 "name": "malloc0", 00:19:07.967 "num_blocks": 8192, 00:19:07.967 "block_size": 4096, 00:19:07.967 "physical_block_size": 4096, 00:19:07.967 "uuid": "bb3d1104-9d86-4ff9-b830-a468b14c6811", 00:19:07.967 "optimal_io_boundary": 0 00:19:07.967 } 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "method": "bdev_wait_for_examine" 00:19:07.967 } 00:19:07.967 ] 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "subsystem": "nbd", 00:19:07.967 "config": [] 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "subsystem": "scheduler", 00:19:07.967 "config": [ 00:19:07.967 { 00:19:07.967 "method": "framework_set_scheduler", 00:19:07.967 "params": { 00:19:07.967 "name": "static" 00:19:07.967 } 00:19:07.967 } 00:19:07.967 ] 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "subsystem": "nvmf", 00:19:07.967 "config": [ 00:19:07.967 { 00:19:07.967 "method": "nvmf_set_config", 00:19:07.967 "params": { 00:19:07.967 "discovery_filter": "match_any", 00:19:07.967 "admin_cmd_passthru": { 00:19:07.967 "identify_ctrlr": false 00:19:07.967 } 00:19:07.967 } 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "method": "nvmf_set_max_subsystems", 00:19:07.967 "params": { 00:19:07.967 "max_subsystems": 
1024 00:19:07.967 } 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "method": "nvmf_set_crdt", 00:19:07.967 "params": { 00:19:07.967 "crdt1": 0, 00:19:07.967 "crdt2": 0, 00:19:07.967 "crdt3": 0 00:19:07.967 } 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "method": "nvmf_create_transport", 00:19:07.967 "params": { 00:19:07.967 "trtype": "TCP", 00:19:07.967 "max_queue_depth": 128, 00:19:07.967 "max_io_qpairs_per_ctrlr": 127, 00:19:07.967 "in_capsule_data_size": 4096, 00:19:07.967 "max_io_size": 131072, 00:19:07.967 "io_unit_size": 131072, 00:19:07.967 "max_aq_depth": 128, 00:19:07.967 "num_shared_buffers": 511, 00:19:07.967 "buf_cache_size": 4294967295, 00:19:07.967 "dif_insert_or_strip": false, 00:19:07.967 "zcopy": false, 00:19:07.967 "c2h_success": false, 00:19:07.967 "sock_priority": 0, 00:19:07.967 "abort_timeout_sec": 1, 00:19:07.967 "ack_timeout": 0, 00:19:07.967 "data_wr_pool_size": 0 00:19:07.967 } 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "method": "nvmf_create_subsystem", 00:19:07.967 "params": { 00:19:07.967 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.967 "allow_any_host": false, 00:19:07.967 "serial_number": "00000000000000000000", 00:19:07.967 "model_number": "SPDK bdev Controller", 00:19:07.967 "max_namespaces": 32, 00:19:07.967 "min_cntlid": 1, 00:19:07.967 "max_cntlid": 65519, 00:19:07.967 "ana_reporting": false 00:19:07.967 } 00:19:07.967 }, 00:19:07.967 { 00:19:07.967 "method": "nvmf_subsystem_add_host", 00:19:07.967 "params": { 00:19:07.967 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.967 "host": "nqn.2016-06.io.spdk:host1", 00:19:07.968 "psk": "key0" 00:19:07.968 } 00:19:07.968 }, 00:19:07.968 { 00:19:07.968 "method": "nvmf_subsystem_add_ns", 00:19:07.968 "params": { 00:19:07.968 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.968 "namespace": { 00:19:07.968 "nsid": 1, 00:19:07.968 "bdev_name": "malloc0", 00:19:07.968 "nguid": "BB3D11049D864FF9B830A468B14C6811", 00:19:07.968 "uuid": "bb3d1104-9d86-4ff9-b830-a468b14c6811", 00:19:07.968 "no_auto_visible": false 00:19:07.968 } 00:19:07.968 } 00:19:07.968 }, 00:19:07.968 { 00:19:07.968 "method": "nvmf_subsystem_add_listener", 00:19:07.968 "params": { 00:19:07.968 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.968 "listen_address": { 00:19:07.968 "trtype": "TCP", 00:19:07.968 "adrfam": "IPv4", 00:19:07.968 "traddr": "10.0.0.2", 00:19:07.968 "trsvcid": "4420" 00:19:07.968 }, 00:19:07.968 "secure_channel": true 00:19:07.968 } 00:19:07.968 } 00:19:07.968 ] 00:19:07.968 } 00:19:07.968 ] 00:19:07.968 }' 00:19:07.968 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.968 12:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=86210 00:19:07.968 12:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:07.968 12:30:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 86210 00:19:07.968 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86210 ']' 00:19:07.968 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.968 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:07.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.968 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
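From here on the target is relaunched straight from the configuration saved earlier rather than being rebuilt over RPC: the JSON echoed above is handed to nvmf_tgt through /dev/fd/62. A rough sketch of what the script does, assuming bash process substitution is what backs that descriptor:

  # Replay the saved config in one shot; keyring, TCP transport, subsystem,
  # host/PSK binding and the TLS listener are all recreated from the JSON,
  # so no follow-up rpc.py calls are needed before traffic starts.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -c <(echo "$tgtcfg")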
00:19:07.968 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:07.968 12:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.968 [2024-07-12 12:30:37.023772] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:19:07.968 [2024-07-12 12:30:37.024499] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.226 [2024-07-12 12:30:37.164458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.226 [2024-07-12 12:30:37.262632] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.226 [2024-07-12 12:30:37.262961] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.226 [2024-07-12 12:30:37.263167] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.226 [2024-07-12 12:30:37.263406] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.226 [2024-07-12 12:30:37.263594] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:08.226 [2024-07-12 12:30:37.263782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.484 [2024-07-12 12:30:37.435112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:08.484 [2024-07-12 12:30:37.510000] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.484 [2024-07-12 12:30:37.541913] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:08.484 [2024-07-12 12:30:37.542229] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.051 12:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:09.051 12:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:09.051 12:30:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:09.051 12:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:09.051 12:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.051 12:30:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.051 12:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=86242 00:19:09.051 12:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 86242 /var/tmp/bdevperf.sock 00:19:09.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:09.051 12:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 86242 ']' 00:19:09.051 12:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:09.051 12:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:09.052 12:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
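The second bdevperf instance follows the same pattern: the JSON printed below already carries bdev_nvme_attach_controller (with "psk": "key0") and bdev_enable_histogram, so no rpc.py setup runs before perform_tests. A sketch of the launch, again assuming /dev/fd/63 comes from process substitution:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")
  # -z keeps bdevperf idle until bdevperf.py ... perform_tests is invoked,
  # as happens later in this trace.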
00:19:09.052 12:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:09.052 12:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.052 12:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:19:09.052 "subsystems": [ 00:19:09.052 { 00:19:09.052 "subsystem": "keyring", 00:19:09.052 "config": [ 00:19:09.052 { 00:19:09.052 "method": "keyring_file_add_key", 00:19:09.052 "params": { 00:19:09.052 "name": "key0", 00:19:09.052 "path": "/tmp/tmp.9Vy62Z5vs2" 00:19:09.052 } 00:19:09.052 } 00:19:09.052 ] 00:19:09.052 }, 00:19:09.052 { 00:19:09.052 "subsystem": "iobuf", 00:19:09.052 "config": [ 00:19:09.052 { 00:19:09.052 "method": "iobuf_set_options", 00:19:09.052 "params": { 00:19:09.052 "small_pool_count": 8192, 00:19:09.052 "large_pool_count": 1024, 00:19:09.052 "small_bufsize": 8192, 00:19:09.052 "large_bufsize": 135168 00:19:09.052 } 00:19:09.052 } 00:19:09.052 ] 00:19:09.052 }, 00:19:09.052 { 00:19:09.052 "subsystem": "sock", 00:19:09.052 "config": [ 00:19:09.052 { 00:19:09.052 "method": "sock_set_default_impl", 00:19:09.052 "params": { 00:19:09.052 "impl_name": "uring" 00:19:09.052 } 00:19:09.052 }, 00:19:09.052 { 00:19:09.052 "method": "sock_impl_set_options", 00:19:09.052 "params": { 00:19:09.052 "impl_name": "ssl", 00:19:09.052 "recv_buf_size": 4096, 00:19:09.052 "send_buf_size": 4096, 00:19:09.052 "enable_recv_pipe": true, 00:19:09.052 "enable_quickack": false, 00:19:09.052 "enable_placement_id": 0, 00:19:09.052 "enable_zerocopy_send_server": true, 00:19:09.052 "enable_zerocopy_send_client": false, 00:19:09.052 "zerocopy_threshold": 0, 00:19:09.052 "tls_version": 0, 00:19:09.052 "enable_ktls": false 00:19:09.052 } 00:19:09.052 }, 00:19:09.052 { 00:19:09.052 "method": "sock_impl_set_options", 00:19:09.052 "params": { 00:19:09.052 "impl_name": "posix", 00:19:09.052 "recv_buf_size": 2097152, 00:19:09.052 "send_buf_size": 2097152, 00:19:09.052 "enable_recv_pipe": true, 00:19:09.052 "enable_quickack": false, 00:19:09.052 "enable_placement_id": 0, 00:19:09.052 "enable_zerocopy_send_server": true, 00:19:09.052 "enable_zerocopy_send_client": false, 00:19:09.052 "zerocopy_threshold": 0, 00:19:09.052 "tls_version": 0, 00:19:09.052 "enable_ktls": false 00:19:09.052 } 00:19:09.052 }, 00:19:09.052 { 00:19:09.052 "method": "sock_impl_set_options", 00:19:09.052 "params": { 00:19:09.052 "impl_name": "uring", 00:19:09.052 "recv_buf_size": 2097152, 00:19:09.052 "send_buf_size": 2097152, 00:19:09.052 "enable_recv_pipe": true, 00:19:09.052 "enable_quickack": false, 00:19:09.052 "enable_placement_id": 0, 00:19:09.052 "enable_zerocopy_send_server": false, 00:19:09.052 "enable_zerocopy_send_client": false, 00:19:09.052 "zerocopy_threshold": 0, 00:19:09.052 "tls_version": 0, 00:19:09.052 "enable_ktls": false 00:19:09.052 } 00:19:09.052 } 00:19:09.052 ] 00:19:09.052 }, 00:19:09.052 { 00:19:09.052 "subsystem": "vmd", 00:19:09.052 "config": [] 00:19:09.052 }, 00:19:09.052 { 00:19:09.052 "subsystem": "accel", 00:19:09.052 "config": [ 00:19:09.052 { 00:19:09.052 "method": "accel_set_options", 00:19:09.052 "params": { 00:19:09.052 "small_cache_size": 128, 00:19:09.052 "large_cache_size": 16, 00:19:09.052 "task_count": 2048, 00:19:09.052 "sequence_count": 2048, 00:19:09.052 "buf_count": 2048 00:19:09.052 } 00:19:09.052 } 00:19:09.052 ] 00:19:09.052 }, 00:19:09.052 { 00:19:09.052 "subsystem": "bdev", 00:19:09.052 "config": [ 00:19:09.052 { 00:19:09.052 "method": "bdev_set_options", 00:19:09.052 "params": { 00:19:09.052 "bdev_io_pool_size": 65535, 00:19:09.052 
"bdev_io_cache_size": 256, 00:19:09.052 "bdev_auto_examine": true, 00:19:09.052 "iobuf_small_cache_size": 128, 00:19:09.052 "iobuf_large_cache_size": 16 00:19:09.052 } 00:19:09.052 }, 00:19:09.052 { 00:19:09.052 "method": "bdev_raid_set_options", 00:19:09.052 "params": { 00:19:09.052 "process_window_size_kb": 1024 00:19:09.052 } 00:19:09.052 }, 00:19:09.052 { 00:19:09.052 "method": "bdev_iscsi_set_options", 00:19:09.052 "params": { 00:19:09.052 "timeout_sec": 30 00:19:09.052 } 00:19:09.052 }, 00:19:09.052 { 00:19:09.052 "method": "bdev_nvme_set_options", 00:19:09.052 "params": { 00:19:09.052 "action_on_timeout": "none", 00:19:09.052 "timeout_us": 0, 00:19:09.052 "timeout_admin_us": 0, 00:19:09.052 "keep_alive_timeout_ms": 10000, 00:19:09.052 "arbitration_burst": 0, 00:19:09.052 "low_priority_weight": 0, 00:19:09.052 "medium_priority_weight": 0, 00:19:09.052 "high_priority_weight": 0, 00:19:09.052 "nvme_adminq_poll_period_us": 10000, 00:19:09.052 "nvme_ioq_poll_period_us": 0, 00:19:09.052 "io_queue_requests": 512, 00:19:09.052 "delay_cmd_submit": true, 00:19:09.052 "transport_retry_count": 4, 00:19:09.052 "bdev_retry_count": 3, 00:19:09.052 "transport_ack_timeout": 0, 00:19:09.052 "ctrlr_loss_timeout_sec": 0, 00:19:09.052 "reconnect_delay_sec": 0, 00:19:09.052 "fast_io_fail_timeout_sec": 0, 00:19:09.052 "disable_auto_failback": false, 00:19:09.052 "generate_uuids": false, 00:19:09.052 "transport_tos": 0, 00:19:09.052 "nvme_error_stat": false, 00:19:09.052 "rdma_srq_size": 0, 00:19:09.052 "io_path_stat": false, 00:19:09.052 "allow_accel_sequence": false, 00:19:09.052 "rdma_max_cq_size": 0, 00:19:09.052 "rdma_cm_event_timeout_ms": 0, 00:19:09.052 "dhchap_digests": [ 00:19:09.052 "sha256", 00:19:09.052 "sha384", 00:19:09.052 "sha512" 00:19:09.052 ], 00:19:09.052 "dhchap_dhgroups": [ 00:19:09.052 "null", 00:19:09.052 "ffdhe2048", 00:19:09.052 "ffdhe3072", 00:19:09.052 "ffdhe4096", 00:19:09.052 "ffdhe6144", 00:19:09.052 "ffdhe8192" 00:19:09.052 ] 00:19:09.052 } 00:19:09.052 }, 00:19:09.052 { 00:19:09.052 "method": "bdev_nvme_attach_controller", 00:19:09.052 "params": { 00:19:09.052 "name": "nvme0", 00:19:09.052 "trtype": "TCP", 00:19:09.052 "adrfam": "IPv4", 00:19:09.052 "traddr": "10.0.0.2", 00:19:09.052 "trsvcid": "4420", 00:19:09.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.052 "prchk_reftag": false, 00:19:09.052 "prchk_guard": false, 00:19:09.052 "ctrlr_loss_timeout_sec": 0, 00:19:09.052 "reconnect_delay_sec": 0, 00:19:09.052 "fast_io_fail_timeout_sec": 0, 00:19:09.052 "psk": "key0", 00:19:09.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.052 "hdgst": false, 00:19:09.052 "ddgst": false 00:19:09.052 } 00:19:09.052 }, 00:19:09.052 { 00:19:09.052 "method": "bdev_nvme_set_hotplug", 00:19:09.052 "params": { 00:19:09.052 "period_us": 100000, 00:19:09.052 "enable": false 00:19:09.052 } 00:19:09.052 }, 00:19:09.052 { 00:19:09.052 "method": "bdev_enable_histogram", 00:19:09.052 "params": { 00:19:09.052 "name": "nvme0n1", 00:19:09.052 "enable": true 00:19:09.052 } 00:19:09.052 }, 00:19:09.052 { 00:19:09.052 "method": "bdev_wait_for_examine" 00:19:09.052 } 00:19:09.052 ] 00:19:09.052 }, 00:19:09.052 { 00:19:09.052 "subsystem": "nbd", 00:19:09.052 "config": [] 00:19:09.052 } 00:19:09.052 ] 00:19:09.052 }' 00:19:09.052 12:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:09.052 [2024-07-12 12:30:38.015064] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 
22.11.4 initialization... 00:19:09.052 [2024-07-12 12:30:38.015943] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86242 ] 00:19:09.311 [2024-07-12 12:30:38.152157] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.311 [2024-07-12 12:30:38.245096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.311 [2024-07-12 12:30:38.380671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:09.569 [2024-07-12 12:30:38.423864] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:10.136 12:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:10.136 12:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:10.136 12:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:19:10.136 12:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:10.396 12:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.396 12:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:10.396 Running I/O for 1 seconds... 00:19:11.330 00:19:11.330 Latency(us) 00:19:11.330 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.330 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:11.330 Verification LBA range: start 0x0 length 0x2000 00:19:11.330 nvme0n1 : 1.03 3956.83 15.46 0.00 0.00 31922.05 6911.07 18945.86 00:19:11.330 =================================================================================================================== 00:19:11.330 Total : 3956.83 15.46 0.00 0.00 31922.05 6911.07 18945.86 00:19:11.330 0 00:19:11.330 12:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:19:11.330 12:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:19:11.330 12:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:11.330 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:19:11.330 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:19:11.330 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:11.330 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:11.330 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:11.330 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:11.330 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:11.330 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:11.588 nvmf_trace.0 00:19:11.588 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:19:11.588 12:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 86242 00:19:11.588 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 86242 ']' 00:19:11.588 12:30:40 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86242 00:19:11.588 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:11.588 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:11.588 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86242 00:19:11.588 killing process with pid 86242 00:19:11.588 Received shutdown signal, test time was about 1.000000 seconds 00:19:11.588 00:19:11.588 Latency(us) 00:19:11.588 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.588 =================================================================================================================== 00:19:11.588 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:11.588 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:11.588 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:11.588 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86242' 00:19:11.588 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86242 00:19:11.588 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86242 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:11.846 rmmod nvme_tcp 00:19:11.846 rmmod nvme_fabrics 00:19:11.846 rmmod nvme_keyring 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 86210 ']' 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 86210 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 86210 ']' 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 86210 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86210 00:19:11.846 killing process with pid 86210 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86210' 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 86210 00:19:11.846 12:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 86210 00:19:12.105 12:30:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:12.105 12:30:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
00:19:12.105 12:30:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:12.105 12:30:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:12.105 12:30:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:12.105 12:30:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.105 12:30:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:12.105 12:30:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.105 12:30:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:12.105 12:30:41 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.hz773JhsDZ /tmp/tmp.epPsGXXWeE /tmp/tmp.9Vy62Z5vs2 00:19:12.105 00:19:12.105 real 1m26.241s 00:19:12.105 user 2m18.102s 00:19:12.105 sys 0m27.012s 00:19:12.105 ************************************ 00:19:12.105 END TEST nvmf_tls 00:19:12.105 ************************************ 00:19:12.105 12:30:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:12.105 12:30:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.105 12:30:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:12.105 12:30:41 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:12.105 12:30:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:12.105 12:30:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:12.105 12:30:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:12.105 ************************************ 00:19:12.105 START TEST nvmf_fips 00:19:12.105 ************************************ 00:19:12.105 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:12.364 * Looking for test storage... 
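The nvmf_fips preamble that follows is mostly shell plumbing; the substance reduces to a handful of OpenSSL checks, sketched here with the paths and config name taken from the trace below (the md5 input is simplified):

  openssl version                            # the test requires OpenSSL 3.0.0 or newer
  test -f /usr/lib64/ossl-modules/fips.so    # the FIPS provider module must be installed
  OPENSSL_CONF=spdk_fips.conf openssl list -providers   # expect base and fips providers
  openssl md5 /dev/null                      # expected to fail once FIPS mode is enforced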
00:19:12.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:12.364 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:19:12.365 Error setting digest 00:19:12.365 00B22E28F47F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:12.365 00B22E28F47F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:12.365 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:12.623 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:12.623 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.623 12:30:41 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:12.623 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.623 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:12.623 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:12.623 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:12.623 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:12.623 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:12.623 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:12.623 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:12.623 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:12.623 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:12.623 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:12.623 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:12.623 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:12.623 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:12.623 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:12.623 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:12.624 Cannot find device "nvmf_tgt_br" 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:12.624 Cannot find device "nvmf_tgt_br2" 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:12.624 Cannot find device "nvmf_tgt_br" 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:12.624 Cannot find device "nvmf_tgt_br2" 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:12.624 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:12.624 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:12.624 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:12.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:12.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:19:12.882 00:19:12.882 --- 10.0.0.2 ping statistics --- 00:19:12.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.882 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:12.882 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:12.882 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:19:12.882 00:19:12.882 --- 10.0.0.3 ping statistics --- 00:19:12.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.882 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:12.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:12.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:19:12.882 00:19:12.882 --- 10.0.0.1 ping statistics --- 00:19:12.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.882 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:12.882 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=86504 00:19:12.883 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:12.883 12:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 86504 00:19:12.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.883 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 86504 ']' 00:19:12.883 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.883 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:12.883 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.883 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:12.883 12:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:12.883 [2024-07-12 12:30:41.927775] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
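Note on the network setup traced above: nvmf_veth_init builds a throwaway topology with the initiator interface in the root namespace, the target interfaces inside nvmf_tgt_ns_spdk, and both joined through the nvmf_br bridge, then verifies it with the three pings before nvmf_tgt is launched inside the namespace. A minimal sketch of that topology, using only the interface names, addresses and iproute2/iptables commands that appear in this log (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is omitted for brevity):

#!/usr/bin/env bash
# Sketch of the nvmf_veth_init topology, names and addresses as logged above.
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator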
00:19:12.883 [2024-07-12 12:30:41.927929] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.141 [2024-07-12 12:30:42.065063] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.141 [2024-07-12 12:30:42.156642] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.141 [2024-07-12 12:30:42.156703] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:13.141 [2024-07-12 12:30:42.156716] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:13.141 [2024-07-12 12:30:42.156724] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:13.141 [2024-07-12 12:30:42.156733] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:13.141 [2024-07-12 12:30:42.156756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.141 [2024-07-12 12:30:42.210725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:14.077 12:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:14.077 12:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:14.077 12:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:14.077 12:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:14.077 12:30:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:14.077 12:30:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.077 12:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:14.077 12:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:14.077 12:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:19:14.077 12:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:14.077 12:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:19:14.077 12:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:19:14.077 12:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:19:14.077 12:30:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:14.334 [2024-07-12 12:30:43.181875] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.334 [2024-07-12 12:30:43.197828] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:14.334 [2024-07-12 12:30:43.198005] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.334 [2024-07-12 12:30:43.228984] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:14.334 malloc0 00:19:14.334 12:30:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
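The target-side RPC sequence that setup_nvmf_tgt_conf drives is not expanded in the trace above, so here is a rough sketch of the equivalent configuration. The RPC names are the ones used elsewhere in this log (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener); the --psk argument to nvmf_subsystem_add_host is an assumption inferred from the "PSK path" deprecation warning above and may differ on other SPDK revisions.

#!/usr/bin/env bash
# Illustrative TLS target setup; --psk on nvmf_subsystem_add_host is assumed, see note above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
chmod 0600 "$key"                                   # PSK file must not be world-readable
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create -b malloc0 64 512
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"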
00:19:14.334 12:30:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.334 12:30:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=86548 00:19:14.334 12:30:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 86548 /var/tmp/bdevperf.sock 00:19:14.334 12:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 86548 ']' 00:19:14.334 12:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.334 12:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.334 12:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.334 12:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.334 12:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:14.334 [2024-07-12 12:30:43.318254] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:19:14.334 [2024-07-12 12:30:43.318501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86548 ] 00:19:14.591 [2024-07-12 12:30:43.451658] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.591 [2024-07-12 12:30:43.523859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.591 [2024-07-12 12:30:43.578278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:14.591 12:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:14.591 12:30:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:14.591 12:30:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:19:14.848 [2024-07-12 12:30:43.843794] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:14.848 [2024-07-12 12:30:43.843926] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:14.848 TLSTESTn1 00:19:15.104 12:30:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:15.104 Running I/O for 10 seconds... 
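The initiator side of the run above can be replayed by hand against an already-configured target; this sketch just arranges the bdevperf, rpc.py and bdevperf.py invocations exactly as they appear in this log (paths, NQNs and the PSK file are the ones used by this run), with a crude socket wait standing in for waitforlisten.

#!/usr/bin/env bash
# Initiator-side TLS verify run, condensed from the commands logged above.
spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bdevperf.sock
"$spdk"/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
while [ ! -S "$sock" ]; do sleep 0.2; done           # crude wait; the harness uses waitforlisten
# Attach the remote namespace over TCP with the shared PSK; TLS is negotiated here.
"$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk "$spdk"/test/nvmf/fips/key.txt
# Kick off the queued verify workload and wait for the 10 s run to complete.
"$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
kill "$bdevperf_pid"                                  # -z keeps bdevperf resident afterwards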
00:19:25.070 00:19:25.070 Latency(us) 00:19:25.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.070 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:25.070 Verification LBA range: start 0x0 length 0x2000 00:19:25.070 TLSTESTn1 : 10.02 4005.46 15.65 0.00 0.00 31894.58 7238.75 40751.48 00:19:25.070 =================================================================================================================== 00:19:25.070 Total : 4005.46 15.65 0.00 0.00 31894.58 7238.75 40751.48 00:19:25.070 0 00:19:25.070 12:30:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:25.070 12:30:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:25.070 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:19:25.070 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:19:25.070 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:25.070 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:25.070 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:25.070 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:25.070 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:25.070 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:25.070 nvmf_trace.0 00:19:25.328 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:19:25.328 12:30:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 86548 00:19:25.328 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 86548 ']' 00:19:25.328 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 86548 00:19:25.328 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:25.328 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:25.328 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86548 00:19:25.328 killing process with pid 86548 00:19:25.328 Received shutdown signal, test time was about 10.000000 seconds 00:19:25.328 00:19:25.328 Latency(us) 00:19:25.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.328 =================================================================================================================== 00:19:25.328 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:25.328 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:25.328 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:25.328 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86548' 00:19:25.328 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 86548 00:19:25.328 [2024-07-12 12:30:54.188857] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:25.328 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 86548 00:19:25.328 12:30:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:25.328 12:30:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
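The cleanup path above archives the target's trace shared-memory file (/dev/shm/nvmf_trace.0) next to the build output, and the startup notice earlier in this log points out the same file can be copied for offline analysis. A small sketch of doing that by hand, assuming the spdk_trace tool from the standard build layout and assuming its -f flag (read a trace file instead of live shared memory); check --help on your revision:

#!/usr/bin/env bash
# Archive the nvmf target's trace shm file and decode it offline.
spdk=/home/vagrant/spdk_repo/spdk
tar -C /dev/shm/ -czf "$spdk/../output/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
# Offline decode; the -f flag and binary path are assumptions, see note above.
"$spdk"/build/bin/spdk_trace -f /dev/shm/nvmf_trace.0 | head -n 40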
00:19:25.328 12:30:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:25.629 12:30:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:25.629 12:30:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:25.629 12:30:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:25.629 12:30:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:25.629 rmmod nvme_tcp 00:19:25.629 rmmod nvme_fabrics 00:19:25.629 rmmod nvme_keyring 00:19:25.629 12:30:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:25.629 12:30:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:25.629 12:30:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:25.629 12:30:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 86504 ']' 00:19:25.629 12:30:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 86504 00:19:25.629 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 86504 ']' 00:19:25.629 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 86504 00:19:25.629 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:25.629 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:25.629 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86504 00:19:25.629 killing process with pid 86504 00:19:25.629 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:25.629 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:25.629 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86504' 00:19:25.629 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 86504 00:19:25.629 [2024-07-12 12:30:54.526299] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:25.629 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 86504 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:19:25.889 ************************************ 00:19:25.889 END TEST nvmf_fips 00:19:25.889 ************************************ 00:19:25.889 00:19:25.889 real 0m13.638s 00:19:25.889 user 0m17.844s 00:19:25.889 sys 0m5.756s 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:25.889 12:30:54 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:25.889 12:30:54 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:19:25.889 12:30:54 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:19:25.889 12:30:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:25.889 12:30:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:25.889 12:30:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:25.889 ************************************ 00:19:25.889 START TEST nvmf_fuzz 00:19:25.889 ************************************ 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:19:25.889 * Looking for test storage... 00:19:25.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:25.889 12:30:54 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:25.889 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:26.149 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:26.149 Cannot find device "nvmf_tgt_br" 00:19:26.149 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:19:26.149 12:30:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:26.149 Cannot find device "nvmf_tgt_br2" 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:26.149 Cannot find device "nvmf_tgt_br" 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # true 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:26.149 Cannot find device "nvmf_tgt_br2" 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:26.149 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:26.149 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:26.149 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:26.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:26.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:19:26.408 00:19:26.408 --- 10.0.0.2 ping statistics --- 00:19:26.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.408 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:26.408 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:26.408 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:19:26.408 00:19:26.408 --- 10.0.0.3 ping statistics --- 00:19:26.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.408 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:26.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:26.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:19:26.408 00:19:26.408 --- 10.0.0.1 ping statistics --- 00:19:26.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.408 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=86874 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 86874 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 86874 ']' 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:26.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
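waitforlisten above blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock before any rpc_cmd configuration is attempted. A minimal stand-in for that wait, assuming rpc_get_methods is available (it is a core SPDK RPC) and a retry budget similar to the helper's max_retries=100:

#!/usr/bin/env bash
# Poll the target's RPC socket until it is ready (rough stand-in for waitforlisten).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
    if "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
        break                      # target is up and serving RPCs
    fi
    sleep 0.5
done
(( i < 100 )) || { echo "nvmf_tgt never came up on $sock" >&2; exit 1; }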
00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:26.408 12:30:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:27.344 Malloc0 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:19:27.344 12:30:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:19:27.911 Shutting down the fuzz application 00:19:27.911 12:30:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:19:28.169 Shutting down the fuzz application 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:28.169 rmmod nvme_tcp 00:19:28.169 rmmod nvme_fabrics 00:19:28.169 rmmod nvme_keyring 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 86874 ']' 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 86874 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 86874 ']' 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 86874 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:28.169 12:30:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86874 00:19:28.426 12:30:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:28.426 killing process with pid 86874 00:19:28.426 12:30:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:28.426 12:30:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86874' 00:19:28.426 12:30:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 86874 00:19:28.426 12:30:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 86874 00:19:28.426 12:30:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:28.426 12:30:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:28.426 12:30:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:28.426 12:30:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:28.426 12:30:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:28.426 12:30:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.426 12:30:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.426 12:30:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.685 12:30:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:28.685 12:30:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:19:28.685 00:19:28.685 real 0m2.689s 00:19:28.685 user 0m2.897s 00:19:28.685 sys 0m0.631s 00:19:28.685 
************************************ 00:19:28.685 END TEST nvmf_fuzz 00:19:28.685 ************************************ 00:19:28.685 12:30:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:28.685 12:30:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:28.685 12:30:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:28.685 12:30:57 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:19:28.685 12:30:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:28.685 12:30:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:28.685 12:30:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:28.685 ************************************ 00:19:28.685 START TEST nvmf_multiconnection 00:19:28.685 ************************************ 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:19:28.685 * Looking for test storage... 00:19:28.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:28.685 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:28.686 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:28.686 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:28.686 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:28.686 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:28.686 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:28.686 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:28.686 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:28.686 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:28.686 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:28.686 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:28.686 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:28.686 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:28.686 Cannot find device "nvmf_tgt_br" 00:19:28.686 12:30:57 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:19:28.686 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:28.686 Cannot find device "nvmf_tgt_br2" 00:19:28.686 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:19:28.686 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:28.686 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:28.686 Cannot find device "nvmf_tgt_br" 00:19:28.686 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:19:28.686 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:28.686 Cannot find device "nvmf_tgt_br2" 00:19:28.686 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:19:28.686 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:28.946 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:28.946 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:28.946 12:30:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:28.946 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:28.946 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:29.207 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:29.207 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:29.207 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:29.207 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:29.207 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:29.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:29.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:19:29.207 00:19:29.207 --- 10.0.0.2 ping statistics --- 00:19:29.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.208 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:29.208 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:29.208 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:19:29.208 00:19:29.208 --- 10.0.0.3 ping statistics --- 00:19:29.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.208 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:29.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:29.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:29.208 00:19:29.208 --- 10.0.0.1 ping statistics --- 00:19:29.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.208 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:29.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=87061 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 87061 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 87061 ']' 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:29.208 12:30:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:29.208 [2024-07-12 12:30:58.144409] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:19:29.208 [2024-07-12 12:30:58.144509] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.208 [2024-07-12 12:30:58.281423] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:29.467 [2024-07-12 12:30:58.373325] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
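Editor's note: the block above is nvmf_veth_init wiring up an isolated test network before the target starts. A network namespace nvmf_tgt_ns_spdk holds the two target-side veth endpoints (10.0.0.2 and 10.0.0.3), their peers plus the initiator-side peer are enslaved to a bridge nvmf_br in the root namespace, iptables rules admit NVMe/TCP traffic on port 4420 and bridge-internal forwarding, and three pings confirm reachability in both directions. Condensed into a standalone sketch with the same device names and addresses as the trace (the failed cleanup commands and error handling are omitted; this is a sketch of what the log shows, not the full nvmf_veth_init function):

# Build the veth/namespace topology used by this TCP run (condensed sketch).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, stays in the root namespace
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # first target-side pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target-side pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # bridge the three root-namespace peers together
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                          # initiator -> first target address
ping -c 1 10.0.0.3                                          # initiator -> second target address
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target namespace -> initiator
# The target is then started inside the namespace (what nvmfappstart does in this log):
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &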
00:19:29.467 [2024-07-12 12:30:58.373503] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.467 [2024-07-12 12:30:58.373673] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.467 [2024-07-12 12:30:58.373858] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.467 [2024-07-12 12:30:58.373949] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:29.467 [2024-07-12 12:30:58.374123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.467 [2024-07-12 12:30:58.374195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.467 [2024-07-12 12:30:58.374265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:29.467 [2024-07-12 12:30:58.374267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.467 [2024-07-12 12:30:58.428998] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.402 [2024-07-12 12:30:59.202605] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.402 Malloc1 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.402 [2024-07-12 12:30:59.275053] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.402 Malloc2 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.402 Malloc3 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
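Editor's note: the four RPCs above repeat for each of the NVMF_SUBSYS=11 subsystems (the loop at target/multiconnection.sh lines 21-25 in the trace): create a 64 MB malloc bdev with a 512-byte block size, create subsystem nqn.2016-06.io.spdk:cnodeN with serial SPDKN, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. The log drives them through the rpc_cmd test wrapper; an equivalent standalone sketch against scripts/rpc.py, assuming the default /var/tmp/spdk.sock RPC socket that the namespaced target also uses, would look like:

# Provision eleven TCP subsystems, one malloc namespace each (sketch of the logged RPC sequence).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                # done once, before the loop (multiconnection.sh line 19)
for i in $(seq 1 11); do
    $rpc bdev_malloc_create 64 512 -b "Malloc$i"            # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done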
00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.402 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.403 Malloc4 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:30.403 12:30:59 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.403 Malloc5 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.403 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.661 Malloc6 00:19:30.661 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.661 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:19:30.661 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.661 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.661 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.661 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:19:30.662 12:30:59 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.662 Malloc7 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.662 Malloc8 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.662 Malloc9 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.662 Malloc10 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.662 12:30:59 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.662 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.920 Malloc11 00:19:30.920 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.920 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:19:30.920 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.920 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.920 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.920 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:19:30.920 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.920 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.920 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.920 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:19:30.920 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.921 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:30.921 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.921 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:19:30.921 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:30.921 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 
-t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:30.921 12:30:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:19:30.921 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:30.921 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:30.921 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:30.921 12:30:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:33.453 12:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:33.453 12:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:33.453 12:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:19:33.453 12:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:33.453 12:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:33.453 12:31:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:33.453 12:31:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:33.453 12:31:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:19:33.453 12:31:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:19:33.453 12:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:33.453 12:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:33.453 12:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:33.453 12:31:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:35.377 12:31:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:35.377 12:31:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:35.377 12:31:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:19:35.377 12:31:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:35.377 12:31:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:35.377 12:31:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:35.377 12:31:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:35.377 12:31:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:19:35.377 12:31:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:19:35.377 12:31:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:35.377 12:31:04 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:35.377 12:31:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:35.377 12:31:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:37.270 12:31:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:37.270 12:31:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:37.270 12:31:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:19:37.270 12:31:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:37.270 12:31:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:37.270 12:31:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:37.270 12:31:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:37.270 12:31:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:19:37.527 12:31:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:19:37.527 12:31:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:37.527 12:31:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:37.527 12:31:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:37.527 12:31:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:39.427 12:31:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:39.427 12:31:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:39.427 12:31:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:19:39.427 12:31:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:39.427 12:31:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:39.427 12:31:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:39.427 12:31:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:39.427 12:31:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:19:39.685 12:31:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:19:39.685 12:31:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:39.685 12:31:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:39.685 12:31:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:39.685 12:31:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:41.584 12:31:10 
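Editor's note: each nvme connect above is followed by waitforserial, which sleeps and then polls lsblk until a block device with the expected SPDKN serial shows up, for a bounded number of retries, before the loop moves on to the next subsystem. A condensed sketch of that connect-and-wait pattern (not the exact common.sh helper; it reuses the NVME_HOSTNQN and NVME_HOSTID values set by nvmf/common.sh earlier in this run):

# Connect to all eleven subsystems and wait for each namespace to appear (condensed sketch).
for i in $(seq 1 11); do
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n "nqn.2016-06.io.spdk:cnode$i" \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    tries=0
    while [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -lt 1 ]; do
        sleep 2                                             # same 2-second poll interval as waitforserial
        tries=$((tries + 1))
        [ "$tries" -le 15 ] || { echo "namespace with serial SPDK$i never appeared" >&2; exit 1; }
    done
done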
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:41.584 12:31:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:41.584 12:31:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:19:41.585 12:31:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:41.585 12:31:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:41.585 12:31:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:41.585 12:31:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:41.585 12:31:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:19:41.884 12:31:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:19:41.884 12:31:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:41.884 12:31:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:41.884 12:31:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:41.884 12:31:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:43.860 12:31:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:43.860 12:31:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:43.860 12:31:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:19:43.860 12:31:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:43.860 12:31:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:43.860 12:31:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:43.860 12:31:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:43.860 12:31:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:19:43.860 12:31:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:19:43.860 12:31:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:43.860 12:31:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:43.860 12:31:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:43.860 12:31:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:45.771 12:31:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:45.771 12:31:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:45.771 12:31:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:19:46.029 
12:31:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:46.029 12:31:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:46.029 12:31:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:46.029 12:31:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:46.029 12:31:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:19:46.029 12:31:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:19:46.029 12:31:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:46.029 12:31:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:46.029 12:31:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:46.029 12:31:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:48.558 12:31:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:48.558 12:31:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:48.558 12:31:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:19:48.558 12:31:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:48.558 12:31:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:48.558 12:31:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:48.558 12:31:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:48.558 12:31:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:19:48.558 12:31:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:19:48.558 12:31:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:48.558 12:31:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:48.558 12:31:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:48.558 12:31:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:50.454 12:31:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:50.454 12:31:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:50.454 12:31:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:19:50.454 12:31:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:50.454 12:31:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:50.454 12:31:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # 
return 0 00:19:50.454 12:31:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:50.454 12:31:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:19:50.454 12:31:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:19:50.454 12:31:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:50.454 12:31:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:50.454 12:31:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:50.454 12:31:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:52.351 12:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:52.351 12:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:52.351 12:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:19:52.351 12:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:52.351 12:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:52.351 12:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:52.351 12:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:52.351 12:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:19:52.608 12:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:19:52.609 12:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:52.609 12:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:52.609 12:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:52.609 12:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:54.525 12:31:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:54.525 12:31:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:54.525 12:31:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:19:54.525 12:31:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:54.525 12:31:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:54.525 12:31:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:54.525 12:31:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:19:54.525 [global] 00:19:54.525 thread=1 00:19:54.525 invalidate=1 00:19:54.525 rw=read 00:19:54.525 time_based=1 00:19:54.525 
runtime=10 00:19:54.525 ioengine=libaio 00:19:54.525 direct=1 00:19:54.525 bs=262144 00:19:54.525 iodepth=64 00:19:54.525 norandommap=1 00:19:54.525 numjobs=1 00:19:54.525 00:19:54.525 [job0] 00:19:54.525 filename=/dev/nvme0n1 00:19:54.525 [job1] 00:19:54.525 filename=/dev/nvme10n1 00:19:54.525 [job2] 00:19:54.525 filename=/dev/nvme1n1 00:19:54.525 [job3] 00:19:54.525 filename=/dev/nvme2n1 00:19:54.525 [job4] 00:19:54.525 filename=/dev/nvme3n1 00:19:54.525 [job5] 00:19:54.525 filename=/dev/nvme4n1 00:19:54.525 [job6] 00:19:54.525 filename=/dev/nvme5n1 00:19:54.525 [job7] 00:19:54.525 filename=/dev/nvme6n1 00:19:54.525 [job8] 00:19:54.525 filename=/dev/nvme7n1 00:19:54.525 [job9] 00:19:54.525 filename=/dev/nvme8n1 00:19:54.525 [job10] 00:19:54.525 filename=/dev/nvme9n1 00:19:54.783 Could not set queue depth (nvme0n1) 00:19:54.783 Could not set queue depth (nvme10n1) 00:19:54.783 Could not set queue depth (nvme1n1) 00:19:54.783 Could not set queue depth (nvme2n1) 00:19:54.783 Could not set queue depth (nvme3n1) 00:19:54.783 Could not set queue depth (nvme4n1) 00:19:54.783 Could not set queue depth (nvme5n1) 00:19:54.783 Could not set queue depth (nvme6n1) 00:19:54.783 Could not set queue depth (nvme7n1) 00:19:54.783 Could not set queue depth (nvme8n1) 00:19:54.783 Could not set queue depth (nvme9n1) 00:19:54.783 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:54.783 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:54.783 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:54.783 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:54.783 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:54.783 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:54.783 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:54.783 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:54.783 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:54.783 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:54.783 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:54.783 fio-3.35 00:19:54.783 Starting 11 threads 00:20:06.982 00:20:06.982 job0: (groupid=0, jobs=1): err= 0: pid=87517: Fri Jul 12 12:31:34 2024 00:20:06.982 read: IOPS=719, BW=180MiB/s (189MB/s)(1815MiB/10089msec) 00:20:06.982 slat (usec): min=17, max=74402, avg=1359.88, stdev=3550.48 00:20:06.982 clat (msec): min=21, max=200, avg=87.46, stdev=22.03 00:20:06.982 lat (msec): min=21, max=206, avg=88.82, stdev=22.40 00:20:06.982 clat percentiles (msec): 00:20:06.982 | 1.00th=[ 51], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 63], 00:20:06.982 | 30.00th=[ 79], 40.00th=[ 83], 50.00th=[ 86], 60.00th=[ 90], 00:20:06.982 | 70.00th=[ 106], 80.00th=[ 110], 90.00th=[ 115], 95.00th=[ 120], 00:20:06.982 | 99.00th=[ 130], 99.50th=[ 150], 99.90th=[ 188], 99.95th=[ 197], 00:20:06.982 | 99.99th=[ 201] 00:20:06.982 bw ( KiB/s): min=129024, max=272896, per=8.47%, avg=184155.55, 
stdev=45653.88, samples=20 00:20:06.982 iops : min= 504, max= 1066, avg=719.20, stdev=178.46, samples=20 00:20:06.982 lat (msec) : 50=0.94%, 100=66.58%, 250=32.48% 00:20:06.982 cpu : usr=0.29%, sys=2.52%, ctx=1709, majf=0, minf=4097 00:20:06.982 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:20:06.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.982 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.982 issued rwts: total=7259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.982 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.982 job1: (groupid=0, jobs=1): err= 0: pid=87526: Fri Jul 12 12:31:34 2024 00:20:06.982 read: IOPS=663, BW=166MiB/s (174MB/s)(1674MiB/10093msec) 00:20:06.982 slat (usec): min=17, max=59704, avg=1480.29, stdev=3413.25 00:20:06.982 clat (msec): min=38, max=188, avg=94.72, stdev=16.29 00:20:06.982 lat (msec): min=38, max=188, avg=96.20, stdev=16.57 00:20:06.982 clat percentiles (msec): 00:20:06.982 | 1.00th=[ 57], 5.00th=[ 73], 10.00th=[ 80], 20.00th=[ 83], 00:20:06.982 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 90], 60.00th=[ 96], 00:20:06.982 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 115], 95.00th=[ 118], 00:20:06.982 | 99.00th=[ 130], 99.50th=[ 140], 99.90th=[ 184], 99.95th=[ 184], 00:20:06.982 | 99.99th=[ 188] 00:20:06.982 bw ( KiB/s): min=123392, max=228352, per=7.81%, avg=169770.75, stdev=26821.13, samples=20 00:20:06.982 iops : min= 482, max= 892, avg=663.05, stdev=104.84, samples=20 00:20:06.982 lat (msec) : 50=0.22%, 100=64.09%, 250=35.69% 00:20:06.982 cpu : usr=0.29%, sys=2.30%, ctx=1589, majf=0, minf=4097 00:20:06.982 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:20:06.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.982 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.982 issued rwts: total=6697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.982 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.982 job2: (groupid=0, jobs=1): err= 0: pid=87527: Fri Jul 12 12:31:34 2024 00:20:06.982 read: IOPS=833, BW=208MiB/s (218MB/s)(2089MiB/10023msec) 00:20:06.982 slat (usec): min=17, max=38016, avg=1192.93, stdev=2687.33 00:20:06.982 clat (msec): min=20, max=151, avg=75.46, stdev=17.54 00:20:06.982 lat (msec): min=25, max=151, avg=76.65, stdev=17.81 00:20:06.982 clat percentiles (msec): 00:20:06.982 | 1.00th=[ 50], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 58], 00:20:06.982 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 81], 60.00th=[ 84], 00:20:06.982 | 70.00th=[ 86], 80.00th=[ 89], 90.00th=[ 95], 95.00th=[ 109], 00:20:06.982 | 99.00th=[ 120], 99.50th=[ 123], 99.90th=[ 133], 99.95th=[ 136], 00:20:06.982 | 99.99th=[ 153] 00:20:06.982 bw ( KiB/s): min=139543, max=285184, per=9.76%, avg=212175.40, stdev=47101.48, samples=20 00:20:06.982 iops : min= 545, max= 1114, avg=828.65, stdev=183.96, samples=20 00:20:06.982 lat (msec) : 50=1.50%, 100=90.66%, 250=7.84% 00:20:06.982 cpu : usr=0.36%, sys=2.96%, ctx=1910, majf=0, minf=4097 00:20:06.982 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:06.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.982 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.982 issued rwts: total=8354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.982 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.982 job3: (groupid=0, 
jobs=1): err= 0: pid=87528: Fri Jul 12 12:31:34 2024 00:20:06.982 read: IOPS=777, BW=194MiB/s (204MB/s)(1960MiB/10087msec) 00:20:06.982 slat (usec): min=17, max=36541, avg=1270.96, stdev=2879.75 00:20:06.982 clat (msec): min=12, max=196, avg=80.92, stdev=22.06 00:20:06.982 lat (msec): min=13, max=196, avg=82.19, stdev=22.40 00:20:06.982 clat percentiles (msec): 00:20:06.982 | 1.00th=[ 49], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 61], 00:20:06.982 | 30.00th=[ 64], 40.00th=[ 67], 50.00th=[ 81], 60.00th=[ 87], 00:20:06.982 | 70.00th=[ 92], 80.00th=[ 107], 90.00th=[ 112], 95.00th=[ 115], 00:20:06.982 | 99.00th=[ 124], 99.50th=[ 133], 99.90th=[ 190], 99.95th=[ 190], 00:20:06.982 | 99.99th=[ 197] 00:20:06.982 bw ( KiB/s): min=138752, max=269824, per=9.15%, avg=198939.90, stdev=50850.68, samples=20 00:20:06.982 iops : min= 542, max= 1054, avg=777.00, stdev=198.63, samples=20 00:20:06.982 lat (msec) : 20=0.20%, 50=1.29%, 100=74.85%, 250=23.65% 00:20:06.983 cpu : usr=0.46%, sys=2.76%, ctx=1845, majf=0, minf=4097 00:20:06.983 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:06.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.983 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.983 issued rwts: total=7838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.983 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.983 job4: (groupid=0, jobs=1): err= 0: pid=87529: Fri Jul 12 12:31:34 2024 00:20:06.983 read: IOPS=639, BW=160MiB/s (168MB/s)(1611MiB/10079msec) 00:20:06.983 slat (usec): min=17, max=33249, avg=1525.01, stdev=3428.69 00:20:06.983 clat (msec): min=8, max=192, avg=98.41, stdev=21.86 00:20:06.983 lat (msec): min=8, max=207, avg=99.93, stdev=22.27 00:20:06.983 clat percentiles (msec): 00:20:06.983 | 1.00th=[ 40], 5.00th=[ 60], 10.00th=[ 65], 20.00th=[ 81], 00:20:06.983 | 30.00th=[ 88], 40.00th=[ 96], 50.00th=[ 108], 60.00th=[ 111], 00:20:06.983 | 70.00th=[ 113], 80.00th=[ 116], 90.00th=[ 118], 95.00th=[ 122], 00:20:06.983 | 99.00th=[ 131], 99.50th=[ 144], 99.90th=[ 192], 99.95th=[ 192], 00:20:06.983 | 99.99th=[ 192] 00:20:06.983 bw ( KiB/s): min=134144, max=253440, per=7.51%, avg=163355.00, stdev=34436.69, samples=20 00:20:06.983 iops : min= 524, max= 990, avg=638.00, stdev=134.53, samples=20 00:20:06.983 lat (msec) : 10=0.02%, 20=0.19%, 50=1.51%, 100=40.19%, 250=58.10% 00:20:06.983 cpu : usr=0.25%, sys=2.31%, ctx=1601, majf=0, minf=4097 00:20:06.983 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:20:06.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.983 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.983 issued rwts: total=6442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.983 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.983 job5: (groupid=0, jobs=1): err= 0: pid=87530: Fri Jul 12 12:31:34 2024 00:20:06.983 read: IOPS=810, BW=203MiB/s (212MB/s)(2045MiB/10092msec) 00:20:06.983 slat (usec): min=19, max=41481, avg=1218.46, stdev=2785.56 00:20:06.983 clat (msec): min=16, max=201, avg=77.64, stdev=22.39 00:20:06.983 lat (msec): min=17, max=201, avg=78.86, stdev=22.71 00:20:06.983 clat percentiles (msec): 00:20:06.983 | 1.00th=[ 48], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 58], 00:20:06.983 | 30.00th=[ 62], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 81], 00:20:06.983 | 70.00th=[ 89], 80.00th=[ 106], 90.00th=[ 112], 95.00th=[ 115], 00:20:06.983 | 99.00th=[ 123], 99.50th=[ 132], 
99.90th=[ 201], 99.95th=[ 201], 00:20:06.983 | 99.99th=[ 203] 00:20:06.983 bw ( KiB/s): min=138986, max=274944, per=9.55%, avg=207715.45, stdev=53595.78, samples=20 00:20:06.983 iops : min= 542, max= 1074, avg=811.25, stdev=209.42, samples=20 00:20:06.983 lat (msec) : 20=0.07%, 50=1.99%, 100=75.77%, 250=22.16% 00:20:06.983 cpu : usr=0.31%, sys=3.21%, ctx=1815, majf=0, minf=4097 00:20:06.983 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:06.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.983 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.983 issued rwts: total=8180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.983 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.983 job6: (groupid=0, jobs=1): err= 0: pid=87531: Fri Jul 12 12:31:34 2024 00:20:06.983 read: IOPS=774, BW=194MiB/s (203MB/s)(1939MiB/10011msec) 00:20:06.983 slat (usec): min=17, max=31063, avg=1265.49, stdev=3041.47 00:20:06.983 clat (msec): min=8, max=143, avg=81.27, stdev=25.61 00:20:06.983 lat (msec): min=8, max=146, avg=82.54, stdev=26.02 00:20:06.983 clat percentiles (msec): 00:20:06.983 | 1.00th=[ 31], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 58], 00:20:06.983 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 81], 60.00th=[ 89], 00:20:06.983 | 70.00th=[ 99], 80.00th=[ 113], 90.00th=[ 117], 95.00th=[ 120], 00:20:06.983 | 99.00th=[ 127], 99.50th=[ 130], 99.90th=[ 142], 99.95th=[ 142], 00:20:06.983 | 99.99th=[ 144] 00:20:06.983 bw ( KiB/s): min=135680, max=283136, per=8.87%, avg=192937.11, stdev=56731.78, samples=19 00:20:06.983 iops : min= 530, max= 1106, avg=753.58, stdev=221.67, samples=19 00:20:06.983 lat (msec) : 10=0.04%, 20=0.53%, 50=2.29%, 100=68.56%, 250=28.58% 00:20:06.983 cpu : usr=0.37%, sys=2.74%, ctx=1754, majf=0, minf=4097 00:20:06.983 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:06.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.983 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.983 issued rwts: total=7757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.983 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.983 job7: (groupid=0, jobs=1): err= 0: pid=87532: Fri Jul 12 12:31:34 2024 00:20:06.983 read: IOPS=775, BW=194MiB/s (203MB/s)(1957MiB/10094msec) 00:20:06.983 slat (usec): min=21, max=39392, avg=1275.44, stdev=2881.69 00:20:06.983 clat (msec): min=18, max=201, avg=81.14, stdev=21.88 00:20:06.983 lat (msec): min=22, max=203, avg=82.42, stdev=22.21 00:20:06.983 clat percentiles (msec): 00:20:06.983 | 1.00th=[ 49], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 61], 00:20:06.983 | 30.00th=[ 64], 40.00th=[ 67], 50.00th=[ 82], 60.00th=[ 88], 00:20:06.983 | 70.00th=[ 93], 80.00th=[ 107], 90.00th=[ 112], 95.00th=[ 116], 00:20:06.983 | 99.00th=[ 125], 99.50th=[ 136], 99.90th=[ 184], 99.95th=[ 186], 00:20:06.983 | 99.99th=[ 203] 00:20:06.983 bw ( KiB/s): min=142848, max=270336, per=9.13%, avg=198651.45, stdev=50105.21, samples=20 00:20:06.983 iops : min= 558, max= 1056, avg=775.80, stdev=195.78, samples=20 00:20:06.983 lat (msec) : 20=0.01%, 50=1.38%, 100=74.85%, 250=23.75% 00:20:06.983 cpu : usr=0.38%, sys=3.18%, ctx=1809, majf=0, minf=4097 00:20:06.983 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:06.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.983 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:20:06.983 issued rwts: total=7826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.983 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.983 job8: (groupid=0, jobs=1): err= 0: pid=87533: Fri Jul 12 12:31:34 2024 00:20:06.983 read: IOPS=590, BW=148MiB/s (155MB/s)(1491MiB/10092msec) 00:20:06.983 slat (usec): min=19, max=27079, avg=1626.35, stdev=3656.67 00:20:06.983 clat (msec): min=33, max=198, avg=106.49, stdev=15.42 00:20:06.983 lat (msec): min=34, max=198, avg=108.12, stdev=15.77 00:20:06.983 clat percentiles (msec): 00:20:06.983 | 1.00th=[ 57], 5.00th=[ 83], 10.00th=[ 86], 20.00th=[ 92], 00:20:06.983 | 30.00th=[ 101], 40.00th=[ 109], 50.00th=[ 112], 60.00th=[ 113], 00:20:06.983 | 70.00th=[ 115], 80.00th=[ 117], 90.00th=[ 121], 95.00th=[ 124], 00:20:06.983 | 99.00th=[ 133], 99.50th=[ 157], 99.90th=[ 199], 99.95th=[ 199], 00:20:06.983 | 99.99th=[ 199] 00:20:06.983 bw ( KiB/s): min=135168, max=184832, per=6.94%, avg=150958.65, stdev=18620.54, samples=20 00:20:06.983 iops : min= 528, max= 722, avg=589.50, stdev=72.86, samples=20 00:20:06.983 lat (msec) : 50=0.60%, 100=29.22%, 250=70.18% 00:20:06.983 cpu : usr=0.23%, sys=2.36%, ctx=1500, majf=0, minf=4097 00:20:06.983 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:20:06.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.983 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.983 issued rwts: total=5962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.983 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.983 job9: (groupid=0, jobs=1): err= 0: pid=87534: Fri Jul 12 12:31:34 2024 00:20:06.983 read: IOPS=810, BW=203MiB/s (213MB/s)(2043MiB/10077msec) 00:20:06.983 slat (usec): min=17, max=27404, avg=1219.91, stdev=2759.56 00:20:06.983 clat (msec): min=12, max=193, avg=77.64, stdev=22.42 00:20:06.983 lat (msec): min=12, max=193, avg=78.86, stdev=22.75 00:20:06.983 clat percentiles (msec): 00:20:06.983 | 1.00th=[ 44], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 58], 00:20:06.983 | 30.00th=[ 62], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 82], 00:20:06.983 | 70.00th=[ 89], 80.00th=[ 106], 90.00th=[ 112], 95.00th=[ 114], 00:20:06.983 | 99.00th=[ 123], 99.50th=[ 132], 99.90th=[ 190], 99.95th=[ 190], 00:20:06.983 | 99.99th=[ 194] 00:20:06.983 bw ( KiB/s): min=138752, max=274981, per=9.55%, avg=207658.50, stdev=53585.82, samples=20 00:20:06.983 iops : min= 542, max= 1074, avg=811.10, stdev=209.34, samples=20 00:20:06.983 lat (msec) : 20=0.07%, 50=2.63%, 100=75.52%, 250=21.77% 00:20:06.983 cpu : usr=0.34%, sys=2.88%, ctx=1845, majf=0, minf=4097 00:20:06.983 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:06.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.983 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.983 issued rwts: total=8171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.983 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.983 job10: (groupid=0, jobs=1): err= 0: pid=87536: Fri Jul 12 12:31:34 2024 00:20:06.983 read: IOPS=1125, BW=281MiB/s (295MB/s)(2817MiB/10009msec) 00:20:06.983 slat (usec): min=17, max=37994, avg=883.53, stdev=2393.07 00:20:06.983 clat (msec): min=8, max=149, avg=55.83, stdev=31.73 00:20:06.983 lat (msec): min=10, max=154, avg=56.71, stdev=32.25 00:20:06.983 clat percentiles (msec): 00:20:06.983 | 1.00th=[ 29], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 32], 00:20:06.983 | 30.00th=[ 33], 40.00th=[ 
34], 50.00th=[ 35], 60.00th=[ 58], 00:20:06.983 | 70.00th=[ 63], 80.00th=[ 74], 90.00th=[ 116], 95.00th=[ 120], 00:20:06.984 | 99.00th=[ 126], 99.50th=[ 129], 99.90th=[ 142], 99.95th=[ 144], 00:20:06.984 | 99.99th=[ 150] 00:20:06.984 bw ( KiB/s): min=131072, max=507401, per=12.69%, avg=276035.11, stdev=149408.43, samples=19 00:20:06.984 iops : min= 512, max= 1982, avg=1078.21, stdev=583.67, samples=19 00:20:06.984 lat (msec) : 10=0.01%, 20=0.20%, 50=54.76%, 100=27.39%, 250=17.65% 00:20:06.984 cpu : usr=0.49%, sys=3.58%, ctx=2470, majf=0, minf=4097 00:20:06.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:06.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.984 issued rwts: total=11268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.984 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.984 00:20:06.984 Run status group 0 (all jobs): 00:20:06.984 READ: bw=2124MiB/s (2227MB/s), 148MiB/s-281MiB/s (155MB/s-295MB/s), io=20.9GiB (22.5GB), run=10009-10094msec 00:20:06.984 00:20:06.984 Disk stats (read/write): 00:20:06.984 nvme0n1: ios=14416/0, merge=0/0, ticks=1231319/0, in_queue=1231319, util=97.79% 00:20:06.984 nvme10n1: ios=13273/0, merge=0/0, ticks=1228086/0, in_queue=1228086, util=97.81% 00:20:06.984 nvme1n1: ios=16631/0, merge=0/0, ticks=1235719/0, in_queue=1235719, util=98.13% 00:20:06.984 nvme2n1: ios=15563/0, merge=0/0, ticks=1228866/0, in_queue=1228866, util=98.21% 00:20:06.984 nvme3n1: ios=12759/0, merge=0/0, ticks=1224250/0, in_queue=1224250, util=98.10% 00:20:06.984 nvme4n1: ios=16253/0, merge=0/0, ticks=1230581/0, in_queue=1230581, util=98.54% 00:20:06.984 nvme5n1: ios=15387/0, merge=0/0, ticks=1234265/0, in_queue=1234265, util=98.45% 00:20:06.984 nvme6n1: ios=15530/0, merge=0/0, ticks=1230085/0, in_queue=1230085, util=98.67% 00:20:06.984 nvme7n1: ios=11813/0, merge=0/0, ticks=1228537/0, in_queue=1228537, util=98.94% 00:20:06.984 nvme8n1: ios=16222/0, merge=0/0, ticks=1230001/0, in_queue=1230001, util=98.98% 00:20:06.984 nvme9n1: ios=21503/0, merge=0/0, ticks=1205008/0, in_queue=1205008, util=99.06% 00:20:06.984 12:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:20:06.984 [global] 00:20:06.984 thread=1 00:20:06.984 invalidate=1 00:20:06.984 rw=randwrite 00:20:06.984 time_based=1 00:20:06.984 runtime=10 00:20:06.984 ioengine=libaio 00:20:06.984 direct=1 00:20:06.984 bs=262144 00:20:06.984 iodepth=64 00:20:06.984 norandommap=1 00:20:06.984 numjobs=1 00:20:06.984 00:20:06.984 [job0] 00:20:06.984 filename=/dev/nvme0n1 00:20:06.984 [job1] 00:20:06.984 filename=/dev/nvme10n1 00:20:06.984 [job2] 00:20:06.984 filename=/dev/nvme1n1 00:20:06.984 [job3] 00:20:06.984 filename=/dev/nvme2n1 00:20:06.984 [job4] 00:20:06.984 filename=/dev/nvme3n1 00:20:06.984 [job5] 00:20:06.984 filename=/dev/nvme4n1 00:20:06.984 [job6] 00:20:06.984 filename=/dev/nvme5n1 00:20:06.984 [job7] 00:20:06.984 filename=/dev/nvme6n1 00:20:06.984 [job8] 00:20:06.984 filename=/dev/nvme7n1 00:20:06.984 [job9] 00:20:06.984 filename=/dev/nvme8n1 00:20:06.984 [job10] 00:20:06.984 filename=/dev/nvme9n1 00:20:06.984 Could not set queue depth (nvme0n1) 00:20:06.984 Could not set queue depth (nvme10n1) 00:20:06.984 Could not set queue depth (nvme1n1) 00:20:06.984 Could not set queue depth (nvme2n1) 00:20:06.984 Could not set queue depth (nvme3n1) 
00:20:06.984 Could not set queue depth (nvme4n1) 00:20:06.984 Could not set queue depth (nvme5n1) 00:20:06.984 Could not set queue depth (nvme6n1) 00:20:06.984 Could not set queue depth (nvme7n1) 00:20:06.984 Could not set queue depth (nvme8n1) 00:20:06.984 Could not set queue depth (nvme9n1) 00:20:06.984 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:06.984 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:06.984 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:06.984 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:06.984 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:06.984 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:06.984 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:06.984 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:06.984 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:06.984 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:06.984 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:20:06.984 fio-3.35 00:20:06.984 Starting 11 threads 00:20:17.040 00:20:17.040 job0: (groupid=0, jobs=1): err= 0: pid=87734: Fri Jul 12 12:31:45 2024 00:20:17.040 write: IOPS=335, BW=84.0MiB/s (88.0MB/s)(856MiB/10195msec); 0 zone resets 00:20:17.040 slat (usec): min=22, max=49306, avg=2917.88, stdev=5182.62 00:20:17.040 clat (msec): min=57, max=376, avg=187.51, stdev=21.20 00:20:17.040 lat (msec): min=57, max=376, avg=190.43, stdev=20.81 00:20:17.040 clat percentiles (msec): 00:20:17.040 | 1.00th=[ 130], 5.00th=[ 171], 10.00th=[ 174], 20.00th=[ 180], 00:20:17.040 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 188], 00:20:17.040 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 207], 95.00th=[ 220], 00:20:17.040 | 99.00th=[ 266], 99.50th=[ 326], 99.90th=[ 363], 99.95th=[ 376], 00:20:17.040 | 99.99th=[ 376] 00:20:17.040 bw ( KiB/s): min=71823, max=90112, per=6.00%, avg=86023.15, stdev=4482.69, samples=20 00:20:17.040 iops : min= 280, max= 352, avg=336.00, stdev=17.60, samples=20 00:20:17.040 lat (msec) : 100=0.70%, 250=98.07%, 500=1.23% 00:20:17.040 cpu : usr=0.75%, sys=1.00%, ctx=4969, majf=0, minf=1 00:20:17.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:20:17.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:17.040 issued rwts: total=0,3424,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.040 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:17.040 job1: (groupid=0, jobs=1): err= 0: pid=87735: Fri Jul 12 12:31:45 2024 00:20:17.040 write: IOPS=330, BW=82.6MiB/s (86.6MB/s)(841MiB/10186msec); 0 zone resets 00:20:17.040 slat (usec): min=24, max=75836, avg=2966.76, stdev=5394.98 00:20:17.040 clat (msec): min=78, max=376, avg=190.72, stdev=21.02 00:20:17.040 
lat (msec): min=78, max=376, avg=193.69, stdev=20.57 00:20:17.040 clat percentiles (msec): 00:20:17.040 | 1.00th=[ 157], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 182], 00:20:17.040 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 188], 00:20:17.040 | 70.00th=[ 192], 80.00th=[ 194], 90.00th=[ 209], 95.00th=[ 230], 00:20:17.040 | 99.00th=[ 279], 99.50th=[ 326], 99.90th=[ 363], 99.95th=[ 376], 00:20:17.040 | 99.99th=[ 376] 00:20:17.040 bw ( KiB/s): min=69632, max=90112, per=5.89%, avg=84505.60, stdev=5298.71, samples=20 00:20:17.040 iops : min= 272, max= 352, avg=330.10, stdev=20.70, samples=20 00:20:17.040 lat (msec) : 100=0.36%, 250=98.39%, 500=1.25% 00:20:17.040 cpu : usr=0.93%, sys=0.89%, ctx=4561, majf=0, minf=1 00:20:17.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:20:17.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:17.040 issued rwts: total=0,3364,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.040 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:17.040 job2: (groupid=0, jobs=1): err= 0: pid=87747: Fri Jul 12 12:31:45 2024 00:20:17.040 write: IOPS=336, BW=84.2MiB/s (88.3MB/s)(858MiB/10190msec); 0 zone resets 00:20:17.040 slat (usec): min=21, max=91817, avg=2909.14, stdev=5307.97 00:20:17.040 clat (msec): min=15, max=377, avg=187.01, stdev=26.26 00:20:17.040 lat (msec): min=16, max=377, avg=189.92, stdev=26.07 00:20:17.040 clat percentiles (msec): 00:20:17.040 | 1.00th=[ 61], 5.00th=[ 171], 10.00th=[ 174], 20.00th=[ 180], 00:20:17.040 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 188], 00:20:17.040 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 207], 95.00th=[ 224], 00:20:17.040 | 99.00th=[ 268], 99.50th=[ 326], 99.90th=[ 368], 99.95th=[ 380], 00:20:17.040 | 99.99th=[ 380] 00:20:17.040 bw ( KiB/s): min=75927, max=90112, per=6.01%, avg=86253.95, stdev=3904.53, samples=20 00:20:17.040 iops : min= 296, max= 352, avg=336.90, stdev=15.33, samples=20 00:20:17.040 lat (msec) : 20=0.23%, 50=0.58%, 100=0.70%, 250=97.17%, 500=1.31% 00:20:17.040 cpu : usr=0.86%, sys=0.92%, ctx=4296, majf=0, minf=1 00:20:17.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:20:17.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:17.040 issued rwts: total=0,3432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.040 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:17.040 job3: (groupid=0, jobs=1): err= 0: pid=87748: Fri Jul 12 12:31:45 2024 00:20:17.040 write: IOPS=415, BW=104MiB/s (109MB/s)(1051MiB/10128msec); 0 zone resets 00:20:17.040 slat (usec): min=20, max=15107, avg=2368.20, stdev=4084.84 00:20:17.040 clat (msec): min=19, max=286, avg=151.65, stdev=19.20 00:20:17.040 lat (msec): min=22, max=286, avg=154.02, stdev=19.05 00:20:17.040 clat percentiles (msec): 00:20:17.040 | 1.00th=[ 100], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 148], 00:20:17.040 | 30.00th=[ 150], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:20:17.040 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 161], 95.00th=[ 167], 00:20:17.040 | 99.00th=[ 194], 99.50th=[ 236], 99.90th=[ 275], 99.95th=[ 275], 00:20:17.040 | 99.99th=[ 288] 00:20:17.041 bw ( KiB/s): min=92160, max=135168, per=7.39%, avg=106035.20, stdev=9551.09, samples=20 00:20:17.041 iops : min= 360, max= 528, avg=414.20, stdev=37.31, samples=20 
00:20:17.041 lat (msec) : 20=0.02%, 50=0.36%, 100=0.67%, 250=98.62%, 500=0.33% 00:20:17.041 cpu : usr=0.84%, sys=1.01%, ctx=5221, majf=0, minf=1 00:20:17.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:20:17.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:17.041 issued rwts: total=0,4205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:17.041 job4: (groupid=0, jobs=1): err= 0: pid=87749: Fri Jul 12 12:31:45 2024 00:20:17.041 write: IOPS=395, BW=99.0MiB/s (104MB/s)(1003MiB/10128msec); 0 zone resets 00:20:17.041 slat (usec): min=16, max=111895, avg=2436.70, stdev=4737.81 00:20:17.041 clat (msec): min=21, max=279, avg=159.14, stdev=25.17 00:20:17.041 lat (msec): min=23, max=279, avg=161.58, stdev=25.20 00:20:17.041 clat percentiles (msec): 00:20:17.041 | 1.00th=[ 63], 5.00th=[ 144], 10.00th=[ 148], 20.00th=[ 150], 00:20:17.041 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:20:17.041 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 180], 95.00th=[ 222], 00:20:17.041 | 99.00th=[ 234], 99.50th=[ 251], 99.90th=[ 271], 99.95th=[ 271], 00:20:17.041 | 99.99th=[ 279] 00:20:17.041 bw ( KiB/s): min=63488, max=128512, per=7.05%, avg=101043.20, stdev=12201.55, samples=20 00:20:17.041 iops : min= 248, max= 502, avg=394.70, stdev=47.66, samples=20 00:20:17.041 lat (msec) : 50=0.67%, 100=1.50%, 250=97.31%, 500=0.52% 00:20:17.041 cpu : usr=0.90%, sys=1.11%, ctx=4197, majf=0, minf=1 00:20:17.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:20:17.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:17.041 issued rwts: total=0,4010,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:17.041 job5: (groupid=0, jobs=1): err= 0: pid=87752: Fri Jul 12 12:31:45 2024 00:20:17.041 write: IOPS=1132, BW=283MiB/s (297MB/s)(2845MiB/10053msec); 0 zone resets 00:20:17.041 slat (usec): min=15, max=6356, avg=874.38, stdev=1454.16 00:20:17.041 clat (msec): min=8, max=107, avg=55.64, stdev= 3.53 00:20:17.041 lat (msec): min=8, max=107, avg=56.51, stdev= 3.34 00:20:17.041 clat percentiles (msec): 00:20:17.041 | 1.00th=[ 52], 5.00th=[ 53], 10.00th=[ 53], 20.00th=[ 54], 00:20:17.041 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 56], 60.00th=[ 57], 00:20:17.041 | 70.00th=[ 57], 80.00th=[ 57], 90.00th=[ 58], 95.00th=[ 59], 00:20:17.041 | 99.00th=[ 64], 99.50th=[ 70], 99.90th=[ 96], 99.95th=[ 104], 00:20:17.041 | 99.99th=[ 108] 00:20:17.041 bw ( KiB/s): min=282112, max=295503, per=20.22%, avg=289918.85, stdev=3227.00, samples=20 00:20:17.041 iops : min= 1102, max= 1154, avg=1132.40, stdev=12.56, samples=20 00:20:17.041 lat (msec) : 10=0.04%, 20=0.11%, 50=0.25%, 100=99.55%, 250=0.05% 00:20:17.041 cpu : usr=2.04%, sys=2.55%, ctx=13953, majf=0, minf=1 00:20:17.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:17.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:17.041 issued rwts: total=0,11381,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:17.041 job6: (groupid=0, jobs=1): err= 0: pid=87753: Fri 
Jul 12 12:31:45 2024 00:20:17.041 write: IOPS=416, BW=104MiB/s (109MB/s)(1056MiB/10145msec); 0 zone resets 00:20:17.041 slat (usec): min=22, max=14978, avg=2361.35, stdev=4067.44 00:20:17.041 clat (msec): min=8, max=300, avg=151.14, stdev=21.68 00:20:17.041 lat (msec): min=8, max=300, avg=153.51, stdev=21.61 00:20:17.041 clat percentiles (msec): 00:20:17.041 | 1.00th=[ 66], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 148], 00:20:17.041 | 30.00th=[ 150], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:20:17.041 | 70.00th=[ 159], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 167], 00:20:17.041 | 99.00th=[ 197], 99.50th=[ 251], 99.90th=[ 292], 99.95th=[ 292], 00:20:17.041 | 99.99th=[ 300] 00:20:17.041 bw ( KiB/s): min=94208, max=137490, per=7.43%, avg=106597.65, stdev=10518.31, samples=20 00:20:17.041 iops : min= 368, max= 537, avg=416.15, stdev=41.14, samples=20 00:20:17.041 lat (msec) : 10=0.09%, 20=0.28%, 50=0.47%, 100=0.57%, 250=98.06% 00:20:17.041 lat (msec) : 500=0.52% 00:20:17.041 cpu : usr=1.01%, sys=1.23%, ctx=4554, majf=0, minf=1 00:20:17.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:20:17.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:17.041 issued rwts: total=0,4225,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:17.041 job7: (groupid=0, jobs=1): err= 0: pid=87754: Fri Jul 12 12:31:45 2024 00:20:17.041 write: IOPS=1187, BW=297MiB/s (311MB/s)(2984MiB/10048msec); 0 zone resets 00:20:17.041 slat (usec): min=16, max=11667, avg=833.30, stdev=1385.98 00:20:17.041 clat (msec): min=14, max=101, avg=53.04, stdev= 3.83 00:20:17.041 lat (msec): min=14, max=101, avg=53.87, stdev= 3.64 00:20:17.041 clat percentiles (msec): 00:20:17.041 | 1.00th=[ 50], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 51], 00:20:17.041 | 30.00th=[ 53], 40.00th=[ 53], 50.00th=[ 53], 60.00th=[ 54], 00:20:17.041 | 70.00th=[ 54], 80.00th=[ 54], 90.00th=[ 55], 95.00th=[ 58], 00:20:17.041 | 99.00th=[ 68], 99.50th=[ 74], 99.90th=[ 91], 99.95th=[ 97], 00:20:17.041 | 99.99th=[ 102] 00:20:17.041 bw ( KiB/s): min=278016, max=311296, per=21.19%, avg=303897.60, stdev=8915.39, samples=20 00:20:17.041 iops : min= 1086, max= 1216, avg=1187.10, stdev=34.83, samples=20 00:20:17.041 lat (msec) : 20=0.07%, 50=12.56%, 100=87.36%, 250=0.02% 00:20:17.041 cpu : usr=1.75%, sys=2.84%, ctx=15042, majf=0, minf=1 00:20:17.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:20:17.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:17.041 issued rwts: total=0,11934,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:17.041 job8: (groupid=0, jobs=1): err= 0: pid=87755: Fri Jul 12 12:31:45 2024 00:20:17.041 write: IOPS=420, BW=105MiB/s (110MB/s)(1065MiB/10136msec); 0 zone resets 00:20:17.041 slat (usec): min=20, max=16873, avg=2327.19, stdev=4059.09 00:20:17.041 clat (msec): min=7, max=293, avg=149.93, stdev=23.72 00:20:17.041 lat (msec): min=7, max=293, avg=152.25, stdev=23.78 00:20:17.041 clat percentiles (msec): 00:20:17.041 | 1.00th=[ 50], 5.00th=[ 113], 10.00th=[ 121], 20.00th=[ 148], 00:20:17.041 | 30.00th=[ 150], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:20:17.041 | 70.00th=[ 159], 80.00th=[ 159], 90.00th=[ 161], 95.00th=[ 
167], 00:20:17.041 | 99.00th=[ 197], 99.50th=[ 245], 99.90th=[ 284], 99.95th=[ 284], 00:20:17.041 | 99.99th=[ 296] 00:20:17.041 bw ( KiB/s): min=92160, max=154624, per=7.49%, avg=107417.60, stdev=13660.45, samples=20 00:20:17.041 iops : min= 360, max= 604, avg=419.60, stdev=53.36, samples=20 00:20:17.041 lat (msec) : 10=0.05%, 20=0.19%, 50=0.77%, 100=2.04%, 250=96.53% 00:20:17.041 lat (msec) : 500=0.42% 00:20:17.041 cpu : usr=0.78%, sys=1.05%, ctx=5651, majf=0, minf=1 00:20:17.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:20:17.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:17.041 issued rwts: total=0,4259,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:17.041 job9: (groupid=0, jobs=1): err= 0: pid=87757: Fri Jul 12 12:31:45 2024 00:20:17.041 write: IOPS=336, BW=84.1MiB/s (88.1MB/s)(857MiB/10195msec); 0 zone resets 00:20:17.041 slat (usec): min=21, max=45094, avg=2914.85, stdev=5185.91 00:20:17.041 clat (msec): min=31, max=378, avg=187.33, stdev=24.06 00:20:17.041 lat (msec): min=31, max=378, avg=190.25, stdev=23.80 00:20:17.041 clat percentiles (msec): 00:20:17.041 | 1.00th=[ 88], 5.00th=[ 171], 10.00th=[ 174], 20.00th=[ 180], 00:20:17.041 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 188], 00:20:17.041 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 207], 95.00th=[ 224], 00:20:17.041 | 99.00th=[ 268], 99.50th=[ 326], 99.90th=[ 368], 99.95th=[ 380], 00:20:17.041 | 99.99th=[ 380] 00:20:17.041 bw ( KiB/s): min=73728, max=92160, per=6.01%, avg=86118.40, stdev=4279.19, samples=20 00:20:17.041 iops : min= 288, max= 360, avg=336.40, stdev=16.72, samples=20 00:20:17.041 lat (msec) : 50=0.47%, 100=0.70%, 250=97.61%, 500=1.23% 00:20:17.041 cpu : usr=0.58%, sys=0.92%, ctx=4018, majf=0, minf=1 00:20:17.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:20:17.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:17.041 issued rwts: total=0,3428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:17.041 job10: (groupid=0, jobs=1): err= 0: pid=87758: Fri Jul 12 12:31:45 2024 00:20:17.041 write: IOPS=338, BW=84.6MiB/s (88.7MB/s)(862MiB/10187msec); 0 zone resets 00:20:17.041 slat (usec): min=19, max=41892, avg=2874.39, stdev=5176.02 00:20:17.041 clat (msec): min=3, max=365, avg=186.11, stdev=28.93 00:20:17.041 lat (msec): min=4, max=379, avg=188.98, stdev=28.86 00:20:17.041 clat percentiles (msec): 00:20:17.041 | 1.00th=[ 27], 5.00th=[ 171], 10.00th=[ 174], 20.00th=[ 180], 00:20:17.041 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:20:17.041 | 70.00th=[ 190], 80.00th=[ 192], 90.00th=[ 207], 95.00th=[ 224], 00:20:17.041 | 99.00th=[ 271], 99.50th=[ 317], 99.90th=[ 368], 99.95th=[ 368], 00:20:17.041 | 99.99th=[ 368] 00:20:17.041 bw ( KiB/s): min=73728, max=95232, per=6.04%, avg=86656.00, stdev=4263.93, samples=20 00:20:17.041 iops : min= 288, max= 372, avg=338.50, stdev=16.66, samples=20 00:20:17.041 lat (msec) : 4=0.03%, 10=0.26%, 20=0.41%, 50=0.90%, 100=0.32% 00:20:17.041 lat (msec) : 250=96.93%, 500=1.16% 00:20:17.041 cpu : usr=0.76%, sys=0.93%, ctx=1454, majf=0, minf=1 00:20:17.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 
00:20:17.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:17.041 issued rwts: total=0,3448,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.041 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:17.041 00:20:17.041 Run status group 0 (all jobs): 00:20:17.041 WRITE: bw=1400MiB/s (1468MB/s), 82.6MiB/s-297MiB/s (86.6MB/s-311MB/s), io=13.9GiB (15.0GB), run=10048-10195msec 00:20:17.041 00:20:17.041 Disk stats (read/write): 00:20:17.041 nvme0n1: ios=49/6709, merge=0/0, ticks=77/1210251, in_queue=1210328, util=97.99% 00:20:17.042 nvme10n1: ios=49/6595, merge=0/0, ticks=55/1209887, in_queue=1209942, util=98.09% 00:20:17.042 nvme1n1: ios=42/6733, merge=0/0, ticks=39/1210414, in_queue=1210453, util=98.16% 00:20:17.042 nvme2n1: ios=24/8265, merge=0/0, ticks=26/1209716, in_queue=1209742, util=97.88% 00:20:17.042 nvme3n1: ios=0/7866, merge=0/0, ticks=0/1210870, in_queue=1210870, util=97.85% 00:20:17.042 nvme4n1: ios=13/22636, merge=0/0, ticks=40/1219261, in_queue=1219301, util=98.45% 00:20:17.042 nvme5n1: ios=0/8328, merge=0/0, ticks=0/1213342, in_queue=1213342, util=98.43% 00:20:17.042 nvme6n1: ios=0/23719, merge=0/0, ticks=0/1218601, in_queue=1218601, util=98.47% 00:20:17.042 nvme7n1: ios=0/8385, merge=0/0, ticks=0/1212883, in_queue=1212883, util=98.69% 00:20:17.042 nvme8n1: ios=0/6720, merge=0/0, ticks=0/1210968, in_queue=1210968, util=98.81% 00:20:17.042 nvme9n1: ios=0/6755, merge=0/0, ticks=0/1209454, in_queue=1209454, util=98.80% 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:17.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode2 00:20:17.042 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:20:17.042 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:20:17.042 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:17.042 12:31:45 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:20:17.042 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:20:17.042 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:20:17.042 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:20:17.042 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.042 12:31:45 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:20:17.043 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:20:17.043 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:20:17.043 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1219 -- # local i=0 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:17.043 12:31:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:17.043 rmmod nvme_tcp 00:20:17.043 rmmod nvme_fabrics 00:20:17.043 rmmod nvme_keyring 00:20:17.043 12:31:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:17.043 12:31:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:20:17.043 12:31:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:20:17.043 12:31:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 87061 ']' 00:20:17.043 12:31:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 87061 00:20:17.043 12:31:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 87061 ']' 00:20:17.043 12:31:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 87061 00:20:17.043 12:31:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:20:17.043 12:31:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:17.043 12:31:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87061 00:20:17.043 killing process with pid 87061 00:20:17.043 12:31:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:17.043 12:31:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:17.043 12:31:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87061' 00:20:17.043 12:31:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 87061 
00:20:17.043 12:31:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 87061 00:20:17.609 12:31:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:17.609 12:31:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:17.609 12:31:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:17.609 12:31:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:17.609 12:31:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:17.609 12:31:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.609 12:31:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:17.609 12:31:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.609 12:31:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:17.609 00:20:17.609 real 0m49.006s 00:20:17.609 user 2m40.795s 00:20:17.609 sys 0m34.584s 00:20:17.609 12:31:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:17.609 ************************************ 00:20:17.609 END TEST nvmf_multiconnection 00:20:17.609 ************************************ 00:20:17.609 12:31:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:17.609 12:31:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:17.609 12:31:46 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:20:17.609 12:31:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:17.609 12:31:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:17.609 12:31:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:17.609 ************************************ 00:20:17.609 START TEST nvmf_initiator_timeout 00:20:17.609 ************************************ 00:20:17.609 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:20:17.869 * Looking for test storage... 
00:20:17.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:17.869 12:31:46 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:17.869 Cannot find device "nvmf_tgt_br" 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:17.869 Cannot find device "nvmf_tgt_br2" 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:17.869 Cannot find device "nvmf_tgt_br" 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:17.869 Cannot find device "nvmf_tgt_br2" 00:20:17.869 12:31:46 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:17.869 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:17.869 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:17.869 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:17.870 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:17.870 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:17.870 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:17.870 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:17.870 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:17.870 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:17.870 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:18.127 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:18.127 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:18.128 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:18.128 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:18.128 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:18.128 12:31:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
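The nvmf_veth_init steps logged above and just below build the virtual network these TCP tests run over: an initiator-side veth pair keeping nvmf_init_if at 10.0.0.1, two target-side pairs whose nvmf_tgt_if/nvmf_tgt_if2 ends are moved into the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, and a bridge nvmf_br joining the host-side peers, followed by iptables rules for TCP port 4420 and ping checks. A minimal standalone sketch of that topology, with names and addresses copied from the commands in this log (the script is an illustration only, not part of nvmf/common.sh):

  #!/usr/bin/env bash
  # Sketch: rebuild the veth/namespace topology used by nvmf_veth_init (run as root).
  set -euo pipefail
  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"
  # Initiator pair stays in the default namespace; target pairs move their *_if end into $NS.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"
  # Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set nvmf_tgt_if2 up
  ip netns exec "$NS" ip link set lo up
  # One bridge joins the host-side ends of all three pairs.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # Allow NVMe/TCP (port 4420) in and bridge-local forwarding, then verify reachability.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1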
00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:18.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:20:18.128 00:20:18.128 --- 10.0.0.2 ping statistics --- 00:20:18.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.128 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:18.128 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:18.128 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:20:18.128 00:20:18.128 --- 10.0.0.3 ping statistics --- 00:20:18.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.128 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:18.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:18.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:20:18.128 00:20:18.128 --- 10.0.0.1 ping statistics --- 00:20:18.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.128 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=88120 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 88120 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 88120 ']' 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.128 12:31:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:18.128 [2024-07-12 12:31:47.123726] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:20:18.128 [2024-07-12 12:31:47.123820] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.387 [2024-07-12 12:31:47.261166] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:18.387 [2024-07-12 12:31:47.352646] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.387 [2024-07-12 12:31:47.352707] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.387 [2024-07-12 12:31:47.352721] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.387 [2024-07-12 12:31:47.352731] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.387 [2024-07-12 12:31:47.352740] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
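Once nvmf_tgt is up and listening on /var/tmp/spdk.sock, the rpc_cmd calls that follow configure the initiator_timeout target: a 64 MB malloc bdev with 512-byte blocks, a delay bdev Delay0 layered on top of it, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with Delay0 as its namespace, and a listener on 10.0.0.2:4420, after which the initiator attaches with nvme connect. Outside the harness the same sequence can be driven with SPDK's scripts/rpc.py; the sketch below mirrors the arguments visible in this log, and the repository path is the one used by this job (an assumption for any other checkout):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC bdev_malloc_create 64 512 -b Malloc0
  # Delay bdev with 30 us average/p99 read and write latencies (raised later in the test
  # to multi-second values to provoke the initiator timeout, then lowered again).
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side, using the host NQN/ID generated earlier in this run:
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 \
      --hostid=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93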
00:20:18.387 [2024-07-12 12:31:47.352900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.387 [2024-07-12 12:31:47.353167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.387 [2024-07-12 12:31:47.353590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:18.387 [2024-07-12 12:31:47.353624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.387 [2024-07-12 12:31:47.409519] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:19.362 Malloc0 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:19.362 Delay0 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:19.362 [2024-07-12 12:31:48.196013] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:19.362 [2024-07-12 12:31:48.224154] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:19.362 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:19.363 12:31:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:20:21.889 12:31:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:21.889 12:31:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:21.889 12:31:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:21.889 12:31:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:21.889 12:31:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:21.889 12:31:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:20:21.889 12:31:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=88190 00:20:21.889 12:31:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:20:21.889 12:31:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:20:21.889 [global] 00:20:21.889 thread=1 00:20:21.889 invalidate=1 00:20:21.889 rw=write 00:20:21.889 time_based=1 00:20:21.889 runtime=60 00:20:21.889 ioengine=libaio 00:20:21.889 direct=1 00:20:21.889 bs=4096 00:20:21.889 iodepth=1 00:20:21.889 norandommap=0 00:20:21.889 numjobs=1 00:20:21.889 00:20:21.889 verify_dump=1 00:20:21.889 verify_backlog=512 00:20:21.889 verify_state_save=0 00:20:21.889 do_verify=1 00:20:21.889 verify=crc32c-intel 00:20:21.889 [job0] 00:20:21.889 filename=/dev/nvme0n1 00:20:21.889 Could not set queue depth (nvme0n1) 00:20:21.889 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:21.889 fio-3.35 00:20:21.889 Starting 1 thread 00:20:24.415 12:31:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:20:24.415 12:31:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.415 12:31:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:24.415 true 00:20:24.415 12:31:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.415 12:31:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:20:24.415 12:31:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.415 12:31:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:24.415 true 00:20:24.415 12:31:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.415 12:31:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:20:24.415 12:31:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.415 12:31:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:24.415 true 00:20:24.415 12:31:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.415 12:31:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:20:24.416 12:31:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.416 12:31:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:24.416 true 00:20:24.416 12:31:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.416 12:31:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:20:27.692 12:31:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:20:27.692 12:31:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.692 12:31:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:27.692 true 00:20:27.692 12:31:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.692 12:31:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:20:27.692 12:31:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.692 12:31:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:27.692 true 00:20:27.692 12:31:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.692 12:31:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:20:27.692 12:31:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.692 12:31:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:27.692 true 00:20:27.692 12:31:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:20:27.692 12:31:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:20:27.692 12:31:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.692 12:31:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:27.692 true 00:20:27.692 12:31:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.692 12:31:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:20:27.692 12:31:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 88190 00:21:23.909 00:21:23.909 job0: (groupid=0, jobs=1): err= 0: pid=88211: Fri Jul 12 12:32:50 2024 00:21:23.909 read: IOPS=788, BW=3154KiB/s (3230kB/s)(185MiB/60000msec) 00:21:23.909 slat (usec): min=11, max=10504, avg=15.20, stdev=60.14 00:21:23.909 clat (usec): min=159, max=40801k, avg=1071.26, stdev=187582.42 00:21:23.909 lat (usec): min=179, max=40801k, avg=1086.46, stdev=187582.41 00:21:23.909 clat percentiles (usec): 00:21:23.909 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:21:23.909 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:21:23.909 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 233], 95.00th=[ 243], 00:21:23.909 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 420], 99.95th=[ 775], 00:21:23.909 | 99.99th=[ 3294] 00:21:23.909 write: IOPS=793, BW=3174KiB/s (3251kB/s)(186MiB/60000msec); 0 zone resets 00:21:23.909 slat (usec): min=13, max=542, avg=21.16, stdev= 4.97 00:21:23.909 clat (usec): min=3, max=4093, avg=155.98, stdev=42.57 00:21:23.909 lat (usec): min=139, max=4126, avg=177.14, stdev=43.07 00:21:23.909 clat percentiles (usec): 00:21:23.909 | 1.00th=[ 126], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 141], 00:21:23.909 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 00:21:23.909 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 184], 00:21:23.909 | 99.00th=[ 200], 99.50th=[ 208], 99.90th=[ 314], 99.95th=[ 586], 00:21:23.909 | 99.99th=[ 2606] 00:21:23.909 bw ( KiB/s): min= 4096, max=12288, per=100.00%, avg=9808.84, stdev=1802.06, samples=38 00:21:23.909 iops : min= 1024, max= 3072, avg=2452.21, stdev=450.52, samples=38 00:21:23.909 lat (usec) : 4=0.01%, 100=0.01%, 250=98.59%, 500=1.33%, 750=0.03% 00:21:23.909 lat (usec) : 1000=0.02% 00:21:23.909 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:21:23.909 cpu : usr=0.65%, sys=2.22%, ctx=95005, majf=0, minf=2 00:21:23.909 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:23.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.909 issued rwts: total=47309,47616,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.909 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:23.909 00:21:23.909 Run status group 0 (all jobs): 00:21:23.909 READ: bw=3154KiB/s (3230kB/s), 3154KiB/s-3154KiB/s (3230kB/s-3230kB/s), io=185MiB (194MB), run=60000-60000msec 00:21:23.909 WRITE: bw=3174KiB/s (3251kB/s), 3174KiB/s-3174KiB/s (3251kB/s-3251kB/s), io=186MiB (195MB), run=60000-60000msec 00:21:23.909 00:21:23.909 Disk stats (read/write): 00:21:23.909 nvme0n1: ios=47337/47283, merge=0/0, ticks=10063/7752, in_queue=17815, util=99.61% 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:21:23.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:21:23.909 nvmf hotplug test: fio successful as expected 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:23.909 rmmod nvme_tcp 00:21:23.909 rmmod nvme_fabrics 00:21:23.909 rmmod nvme_keyring 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 88120 ']' 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 88120 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 88120 ']' 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 88120 00:21:23.909 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:21:23.910 12:32:50 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:23.910 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88120 00:21:23.910 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:23.910 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:23.910 killing process with pid 88120 00:21:23.910 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88120' 00:21:23.910 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 88120 00:21:23.910 12:32:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 88120 00:21:23.910 12:32:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:23.910 12:32:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:23.910 12:32:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:23.910 12:32:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:23.910 12:32:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:23.910 12:32:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.910 12:32:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:23.910 12:32:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.910 12:32:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:23.910 ************************************ 00:21:23.910 END TEST nvmf_initiator_timeout 00:21:23.910 ************************************ 00:21:23.910 00:21:23.910 real 1m4.454s 00:21:23.910 user 3m54.349s 00:21:23.910 sys 0m20.621s 00:21:23.910 12:32:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:23.910 12:32:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:23.910 12:32:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:23.910 12:32:51 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:21:23.910 12:32:51 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:21:23.910 12:32:51 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:23.910 12:32:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:23.910 12:32:51 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:21:23.910 12:32:51 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:23.910 12:32:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:23.910 12:32:51 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:21:23.910 12:32:51 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:23.910 12:32:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:23.910 12:32:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:23.910 12:32:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:23.910 ************************************ 00:21:23.910 START TEST nvmf_identify 00:21:23.910 ************************************ 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:23.910 * Looking for test storage... 00:21:23.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:23.910 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:23.911 Cannot find device "nvmf_tgt_br" 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:23.911 Cannot find device "nvmf_tgt_br2" 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:23.911 Cannot find device "nvmf_tgt_br" 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:23.911 Cannot find device "nvmf_tgt_br2" 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:23.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:23.911 12:32:51 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:23.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:23.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:23.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:21:23.911 00:21:23.911 --- 10.0.0.2 ping statistics --- 00:21:23.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.911 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:23.911 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:23.911 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:21:23.911 00:21:23.911 --- 10.0.0.3 ping statistics --- 00:21:23.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.911 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:23.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:23.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:21:23.911 00:21:23.911 --- 10.0.0.1 ping statistics --- 00:21:23.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.911 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=89042 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 89042 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 89042 ']' 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
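For the identify test the target is again launched inside the namespace (nvmfappstart runs ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 89042 here) and waitforlisten blocks until it answers on /var/tmp/spdk.sock. A simplified stand-in for that launch-and-wait step, using the paths shown in this log; the polling loop below only approximates waitforlisten and is not the harness implementation:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll until the RPC unix socket exists and the target answers a harmless RPC.
  until [ -S /var/tmp/spdk.sock ] &&
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is ready for rpc.py"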
00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.911 12:32:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.911 [2024-07-12 12:32:51.665559] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:21:23.912 [2024-07-12 12:32:51.665636] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.912 [2024-07-12 12:32:51.799403] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:23.912 [2024-07-12 12:32:51.893520] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.912 [2024-07-12 12:32:51.893573] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.912 [2024-07-12 12:32:51.893585] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.912 [2024-07-12 12:32:51.893593] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.912 [2024-07-12 12:32:51.893601] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.912 [2024-07-12 12:32:51.893739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.912 [2024-07-12 12:32:51.893968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.912 [2024-07-12 12:32:51.894453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:23.912 [2024-07-12 12:32:51.894463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.912 [2024-07-12 12:32:51.947654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.912 [2024-07-12 12:32:52.663708] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.912 Malloc0 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.912 [2024-07-12 12:32:52.763116] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.912 [ 00:21:23.912 { 00:21:23.912 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:23.912 "subtype": "Discovery", 00:21:23.912 "listen_addresses": [ 00:21:23.912 { 00:21:23.912 "trtype": "TCP", 00:21:23.912 "adrfam": "IPv4", 00:21:23.912 "traddr": "10.0.0.2", 00:21:23.912 "trsvcid": "4420" 00:21:23.912 } 00:21:23.912 ], 00:21:23.912 "allow_any_host": true, 00:21:23.912 "hosts": [] 00:21:23.912 }, 00:21:23.912 { 00:21:23.912 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.912 "subtype": "NVMe", 00:21:23.912 "listen_addresses": [ 00:21:23.912 { 00:21:23.912 "trtype": "TCP", 00:21:23.912 "adrfam": "IPv4", 00:21:23.912 "traddr": "10.0.0.2", 00:21:23.912 "trsvcid": "4420" 00:21:23.912 } 00:21:23.912 ], 00:21:23.912 "allow_any_host": true, 00:21:23.912 "hosts": [], 00:21:23.912 "serial_number": "SPDK00000000000001", 00:21:23.912 "model_number": "SPDK bdev Controller", 00:21:23.912 "max_namespaces": 32, 00:21:23.912 "min_cntlid": 1, 00:21:23.912 "max_cntlid": 65519, 00:21:23.912 "namespaces": [ 00:21:23.912 { 00:21:23.912 "nsid": 1, 00:21:23.912 "bdev_name": "Malloc0", 00:21:23.912 "name": "Malloc0", 00:21:23.912 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:23.912 "eui64": "ABCDEF0123456789", 00:21:23.912 "uuid": "99019e5e-2df7-4289-9393-3595aebfad34" 00:21:23.912 } 00:21:23.912 ] 00:21:23.912 } 00:21:23.912 ] 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.912 12:32:52 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:23.912 [2024-07-12 12:32:52.807119] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:21:23.912 [2024-07-12 12:32:52.807166] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89077 ] 00:21:23.912 [2024-07-12 12:32:52.943267] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:23.912 [2024-07-12 12:32:52.943377] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:23.912 [2024-07-12 12:32:52.943389] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:23.912 [2024-07-12 12:32:52.943407] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:23.912 [2024-07-12 12:32:52.943415] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:23.912 [2024-07-12 12:32:52.943550] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:23.912 [2024-07-12 12:32:52.943610] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1224e60 0 00:21:23.912 [2024-07-12 12:32:52.947807] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:23.912 [2024-07-12 12:32:52.947834] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:23.912 [2024-07-12 12:32:52.947840] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:23.912 [2024-07-12 12:32:52.947844] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:23.912 [2024-07-12 12:32:52.947899] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.912 [2024-07-12 12:32:52.947908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.913 [2024-07-12 12:32:52.947913] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1224e60) 00:21:23.913 [2024-07-12 12:32:52.947933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:23.913 [2024-07-12 12:32:52.947966] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125d700, cid 0, qid 0 00:21:23.913 [2024-07-12 12:32:52.955807] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.913 [2024-07-12 12:32:52.955829] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.913 [2024-07-12 12:32:52.955835] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.913 [2024-07-12 12:32:52.955840] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125d700) on tqpair=0x1224e60 00:21:23.913 [2024-07-12 12:32:52.955855] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:23.913 [2024-07-12 12:32:52.955864] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:23.913 [2024-07-12 12:32:52.955871] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:23.913 [2024-07-12 12:32:52.955891] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.913 [2024-07-12 12:32:52.955896] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.913 [2024-07-12 12:32:52.955901] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1224e60) 00:21:23.913 [2024-07-12 12:32:52.955911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.913 [2024-07-12 12:32:52.955938] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125d700, cid 0, qid 0 00:21:23.913 [2024-07-12 12:32:52.956005] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.913 [2024-07-12 12:32:52.956012] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.913 [2024-07-12 12:32:52.956016] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.913 [2024-07-12 12:32:52.956021] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125d700) on tqpair=0x1224e60 00:21:23.913 [2024-07-12 12:32:52.956028] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:23.913 [2024-07-12 12:32:52.956036] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:23.913 [2024-07-12 12:32:52.956044] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.913 [2024-07-12 12:32:52.956048] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.913 [2024-07-12 12:32:52.956052] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1224e60) 00:21:23.913 [2024-07-12 12:32:52.956060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.913 [2024-07-12 12:32:52.956079] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125d700, cid 0, qid 0 00:21:23.913 [2024-07-12 12:32:52.956128] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.913 [2024-07-12 12:32:52.956135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.913 [2024-07-12 12:32:52.956139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.913 [2024-07-12 12:32:52.956143] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125d700) on tqpair=0x1224e60 00:21:23.913 [2024-07-12 12:32:52.956150] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:23.913 [2024-07-12 12:32:52.956159] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:23.913 [2024-07-12 12:32:52.956167] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.913 [2024-07-12 12:32:52.956171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.913 [2024-07-12 12:32:52.956175] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1224e60) 00:21:23.913 [2024-07-12 12:32:52.956183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
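The target configuration driven a few entries above through rpc_cmd (a TCP transport, a 64 MB Malloc bdev with 512-byte blocks, the cnode1 subsystem with one namespace, and listeners for both cnode1 and the discovery subsystem on 10.0.0.2:4420) can be replayed by hand with SPDK's scripts/rpc.py, which rpc_cmd is a thin wrapper around in these tests. A sketch with the same arguments as the log, assuming the target is already running and answering on the default /var/tmp/spdk.sock RPC socket:

# Replay of the logged RPC sequence (run from the SPDK repository root)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems   # should print the same JSON structure shown above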
00:21:23.913 [2024-07-12 12:32:52.956201] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125d700, cid 0, qid 0 00:21:23.913 [2024-07-12 12:32:52.956248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.913 [2024-07-12 12:32:52.956255] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.913 [2024-07-12 12:32:52.956259] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.913 [2024-07-12 12:32:52.956263] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125d700) on tqpair=0x1224e60 00:21:23.913 [2024-07-12 12:32:52.956269] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:23.913 [2024-07-12 12:32:52.956280] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.913 [2024-07-12 12:32:52.956284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.913 [2024-07-12 12:32:52.956289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1224e60) 00:21:23.913 [2024-07-12 12:32:52.956296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.913 [2024-07-12 12:32:52.956314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125d700, cid 0, qid 0 00:21:23.913 [2024-07-12 12:32:52.956361] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.913 [2024-07-12 12:32:52.956368] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.913 [2024-07-12 12:32:52.956371] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.913 [2024-07-12 12:32:52.956376] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125d700) on tqpair=0x1224e60 00:21:23.913 [2024-07-12 12:32:52.956381] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:23.913 [2024-07-12 12:32:52.956387] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:23.913 [2024-07-12 12:32:52.956395] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:23.913 [2024-07-12 12:32:52.956500] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:23.913 [2024-07-12 12:32:52.956506] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:23.913 [2024-07-12 12:32:52.956516] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.913 [2024-07-12 12:32:52.956520] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.913 [2024-07-12 12:32:52.956524] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1224e60) 00:21:23.913 [2024-07-12 12:32:52.956532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.913 [2024-07-12 12:32:52.956550] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125d700, cid 0, qid 0 00:21:23.913 [2024-07-12 12:32:52.956598] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.913 [2024-07-12 12:32:52.956605] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.913 [2024-07-12 12:32:52.956609] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.913 [2024-07-12 12:32:52.956614] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125d700) on tqpair=0x1224e60 00:21:23.913 [2024-07-12 12:32:52.956619] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:23.913 [2024-07-12 12:32:52.956630] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.913 [2024-07-12 12:32:52.956634] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.913 [2024-07-12 12:32:52.956638] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1224e60) 00:21:23.913 [2024-07-12 12:32:52.956646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.913 [2024-07-12 12:32:52.956663] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125d700, cid 0, qid 0 00:21:23.913 [2024-07-12 12:32:52.956707] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.913 [2024-07-12 12:32:52.956714] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.913 [2024-07-12 12:32:52.956718] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.913 [2024-07-12 12:32:52.956722] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125d700) on tqpair=0x1224e60 00:21:23.913 [2024-07-12 12:32:52.956728] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:23.913 [2024-07-12 12:32:52.956733] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:23.913 [2024-07-12 12:32:52.956741] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:23.913 [2024-07-12 12:32:52.956752] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:23.913 [2024-07-12 12:32:52.956763] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.913 [2024-07-12 12:32:52.956768] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1224e60) 00:21:23.914 [2024-07-12 12:32:52.956776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.914 [2024-07-12 12:32:52.956808] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125d700, cid 0, qid 0 00:21:23.914 [2024-07-12 12:32:52.956894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:23.914 [2024-07-12 12:32:52.956901] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:23.914 [2024-07-12 12:32:52.956905] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.956910] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1224e60): datao=0, datal=4096, cccid=0 
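The nvmf_tgt startup banner captured earlier in the log notes that the tracepoint groups enabled by '-e 0xFFFF' can be inspected at runtime with spdk_trace. A short sketch of that workflow, assuming the same build path and shared-memory instance id (-i 0) used by this run:

# Live snapshot of the nvmf tracepoints while the target is still running
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
# Or, as the banner suggests, keep the shared-memory trace file for offline analysis
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0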
00:21:23.914 [2024-07-12 12:32:52.956915] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125d700) on tqpair(0x1224e60): expected_datao=0, payload_size=4096 00:21:23.914 [2024-07-12 12:32:52.956920] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.956929] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.956934] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.956943] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.914 [2024-07-12 12:32:52.956949] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.914 [2024-07-12 12:32:52.956953] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.956957] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125d700) on tqpair=0x1224e60 00:21:23.914 [2024-07-12 12:32:52.956966] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:23.914 [2024-07-12 12:32:52.956971] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:23.914 [2024-07-12 12:32:52.956976] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:23.914 [2024-07-12 12:32:52.956982] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:23.914 [2024-07-12 12:32:52.956987] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:23.914 [2024-07-12 12:32:52.956992] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:23.914 [2024-07-12 12:32:52.957001] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:23.914 [2024-07-12 12:32:52.957009] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.957013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.957017] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1224e60) 00:21:23.914 [2024-07-12 12:32:52.957026] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:23.914 [2024-07-12 12:32:52.957046] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125d700, cid 0, qid 0 00:21:23.914 [2024-07-12 12:32:52.957104] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.914 [2024-07-12 12:32:52.957111] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.914 [2024-07-12 12:32:52.957116] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.957120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125d700) on tqpair=0x1224e60 00:21:23.914 [2024-07-12 12:32:52.957129] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.957133] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.957137] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x1224e60) 00:21:23.914 [2024-07-12 12:32:52.957144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.914 [2024-07-12 12:32:52.957151] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.957155] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.957159] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1224e60) 00:21:23.914 [2024-07-12 12:32:52.957165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.914 [2024-07-12 12:32:52.957171] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.957175] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.957179] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1224e60) 00:21:23.914 [2024-07-12 12:32:52.957185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.914 [2024-07-12 12:32:52.957191] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.957195] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.957199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224e60) 00:21:23.914 [2024-07-12 12:32:52.957205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.914 [2024-07-12 12:32:52.957210] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:23.914 [2024-07-12 12:32:52.957232] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:23.914 [2024-07-12 12:32:52.957241] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.957245] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1224e60) 00:21:23.914 [2024-07-12 12:32:52.957253] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.914 [2024-07-12 12:32:52.957274] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125d700, cid 0, qid 0 00:21:23.914 [2024-07-12 12:32:52.957281] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125d880, cid 1, qid 0 00:21:23.914 [2024-07-12 12:32:52.957286] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125da00, cid 2, qid 0 00:21:23.914 [2024-07-12 12:32:52.957291] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125db80, cid 3, qid 0 00:21:23.914 [2024-07-12 12:32:52.957296] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125dd00, cid 4, qid 0 00:21:23.914 [2024-07-12 12:32:52.957380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.914 [2024-07-12 12:32:52.957386] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.914 [2024-07-12 12:32:52.957390] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.957395] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125dd00) on tqpair=0x1224e60 00:21:23.914 [2024-07-12 12:32:52.957400] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:23.914 [2024-07-12 12:32:52.957410] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:23.914 [2024-07-12 12:32:52.957422] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.957427] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1224e60) 00:21:23.914 [2024-07-12 12:32:52.957435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.914 [2024-07-12 12:32:52.957454] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125dd00, cid 4, qid 0 00:21:23.914 [2024-07-12 12:32:52.957508] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:23.914 [2024-07-12 12:32:52.957516] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:23.914 [2024-07-12 12:32:52.957520] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.957524] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1224e60): datao=0, datal=4096, cccid=4 00:21:23.914 [2024-07-12 12:32:52.957529] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125dd00) on tqpair(0x1224e60): expected_datao=0, payload_size=4096 00:21:23.914 [2024-07-12 12:32:52.957533] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.957541] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.957545] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:23.914 [2024-07-12 12:32:52.957554] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.914 [2024-07-12 12:32:52.957560] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.915 [2024-07-12 12:32:52.957564] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.915 [2024-07-12 12:32:52.957568] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125dd00) on tqpair=0x1224e60 00:21:23.915 [2024-07-12 12:32:52.957582] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:23.915 [2024-07-12 12:32:52.957626] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.915 [2024-07-12 12:32:52.957636] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1224e60) 00:21:23.915 [2024-07-12 12:32:52.957645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.915 [2024-07-12 12:32:52.957653] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.915 [2024-07-12 12:32:52.957658] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.915 [2024-07-12 12:32:52.957662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1224e60) 00:21:23.915 [2024-07-12 
12:32:52.957668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.915 [2024-07-12 12:32:52.957697] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125dd00, cid 4, qid 0 00:21:23.915 [2024-07-12 12:32:52.957705] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125de80, cid 5, qid 0 00:21:23.915 [2024-07-12 12:32:52.957835] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:23.915 [2024-07-12 12:32:52.957844] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:23.915 [2024-07-12 12:32:52.957848] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:23.915 [2024-07-12 12:32:52.957852] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1224e60): datao=0, datal=1024, cccid=4 00:21:23.915 [2024-07-12 12:32:52.957857] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125dd00) on tqpair(0x1224e60): expected_datao=0, payload_size=1024 00:21:23.915 [2024-07-12 12:32:52.957862] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.915 [2024-07-12 12:32:52.957869] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:23.915 [2024-07-12 12:32:52.957873] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:23.915 [2024-07-12 12:32:52.957880] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.915 [2024-07-12 12:32:52.957886] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.915 [2024-07-12 12:32:52.957889] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.915 [2024-07-12 12:32:52.957893] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125de80) on tqpair=0x1224e60 00:21:23.915 [2024-07-12 12:32:52.957913] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.915 [2024-07-12 12:32:52.957921] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.915 [2024-07-12 12:32:52.957925] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.915 [2024-07-12 12:32:52.957929] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125dd00) on tqpair=0x1224e60 00:21:23.915 [2024-07-12 12:32:52.957942] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.915 [2024-07-12 12:32:52.957947] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1224e60) 00:21:23.915 [2024-07-12 12:32:52.957955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.915 [2024-07-12 12:32:52.957980] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125dd00, cid 4, qid 0 00:21:23.915 [2024-07-12 12:32:52.958047] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:23.915 [2024-07-12 12:32:52.958054] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:23.915 [2024-07-12 12:32:52.958058] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:23.915 [2024-07-12 12:32:52.958062] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1224e60): datao=0, datal=3072, cccid=4 00:21:23.915 [2024-07-12 12:32:52.958067] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125dd00) on tqpair(0x1224e60): expected_datao=0, payload_size=3072 00:21:23.915 
[2024-07-12 12:32:52.958071] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.915 [2024-07-12 12:32:52.958079] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:23.915 [2024-07-12 12:32:52.958083] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:23.915 [2024-07-12 12:32:52.958091] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.915 [2024-07-12 12:32:52.958097] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.915 [2024-07-12 12:32:52.958101] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.915 [2024-07-12 12:32:52.958105] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125dd00) on tqpair=0x1224e60 00:21:23.915 [2024-07-12 12:32:52.958116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.915 [2024-07-12 12:32:52.958120] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1224e60) 00:21:23.915 [2024-07-12 12:32:52.958128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.915 [2024-07-12 12:32:52.958152] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125dd00, cid 4, qid 0 00:21:23.915 [2024-07-12 12:32:52.958211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:23.915 [2024-07-12 12:32:52.958218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:23.915 [2024-07-12 12:32:52.958222] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:23.915 [2024-07-12 12:32:52.958226] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1224e60): datao=0, datal=8, cccid=4 00:21:23.915 ===================================================== 00:21:23.915 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:23.915 ===================================================== 00:21:23.915 Controller Capabilities/Features 00:21:23.915 ================================ 00:21:23.915 Vendor ID: 0000 00:21:23.915 Subsystem Vendor ID: 0000 00:21:23.915 Serial Number: .................... 00:21:23.915 Model Number: ........................................ 
00:21:23.915 Firmware Version: 24.09 00:21:23.915 Recommended Arb Burst: 0 00:21:23.915 IEEE OUI Identifier: 00 00 00 00:21:23.915 Multi-path I/O 00:21:23.915 May have multiple subsystem ports: No 00:21:23.915 May have multiple controllers: No 00:21:23.915 Associated with SR-IOV VF: No 00:21:23.915 Max Data Transfer Size: 131072 00:21:23.915 Max Number of Namespaces: 0 00:21:23.915 Max Number of I/O Queues: 1024 00:21:23.915 NVMe Specification Version (VS): 1.3 00:21:23.915 NVMe Specification Version (Identify): 1.3 00:21:23.915 Maximum Queue Entries: 128 00:21:23.915 Contiguous Queues Required: Yes 00:21:23.915 Arbitration Mechanisms Supported 00:21:23.915 Weighted Round Robin: Not Supported 00:21:23.915 Vendor Specific: Not Supported 00:21:23.915 Reset Timeout: 15000 ms 00:21:23.915 Doorbell Stride: 4 bytes 00:21:23.915 NVM Subsystem Reset: Not Supported 00:21:23.915 Command Sets Supported 00:21:23.915 NVM Command Set: Supported 00:21:23.915 Boot Partition: Not Supported 00:21:23.915 Memory Page Size Minimum: 4096 bytes 00:21:23.915 Memory Page Size Maximum: 4096 bytes 00:21:23.915 Persistent Memory Region: Not Supported 00:21:23.915 Optional Asynchronous Events Supported 00:21:23.916 Namespace Attribute Notices: Not Supported 00:21:23.916 Firmware Activation Notices: Not Supported 00:21:23.916 ANA Change Notices: Not Supported 00:21:23.916 PLE Aggregate Log Change Notices: Not Supported 00:21:23.916 LBA Status Info Alert Notices: Not Supported 00:21:23.916 EGE Aggregate Log Change Notices: Not Supported 00:21:23.916 Normal NVM Subsystem Shutdown event: Not Supported 00:21:23.916 Zone Descriptor Change Notices: Not Supported 00:21:23.916 Discovery Log Change Notices: Supported 00:21:23.916 Controller Attributes 00:21:23.916 128-bit Host Identifier: Not Supported 00:21:23.916 Non-Operational Permissive Mode: Not Supported 00:21:23.916 NVM Sets: Not Supported 00:21:23.916 Read Recovery Levels: Not Supported 00:21:23.916 Endurance Groups: Not Supported 00:21:23.916 Predictable Latency Mode: Not Supported 00:21:23.916 Traffic Based Keep ALive: Not Supported 00:21:23.916 Namespace Granularity: Not Supported 00:21:23.916 SQ Associations: Not Supported 00:21:23.916 UUID List: Not Supported 00:21:23.916 Multi-Domain Subsystem: Not Supported 00:21:23.916 Fixed Capacity Management: Not Supported 00:21:23.916 Variable Capacity Management: Not Supported 00:21:23.916 Delete Endurance Group: Not Supported 00:21:23.916 Delete NVM Set: Not Supported 00:21:23.916 Extended LBA Formats Supported: Not Supported 00:21:23.916 Flexible Data Placement Supported: Not Supported 00:21:23.916 00:21:23.916 Controller Memory Buffer Support 00:21:23.916 ================================ 00:21:23.916 Supported: No 00:21:23.916 00:21:23.916 Persistent Memory Region Support 00:21:23.916 ================================ 00:21:23.916 Supported: No 00:21:23.916 00:21:23.916 Admin Command Set Attributes 00:21:23.916 ============================ 00:21:23.916 Security Send/Receive: Not Supported 00:21:23.916 Format NVM: Not Supported 00:21:23.916 Firmware Activate/Download: Not Supported 00:21:23.916 Namespace Management: Not Supported 00:21:23.916 Device Self-Test: Not Supported 00:21:23.916 Directives: Not Supported 00:21:23.916 NVMe-MI: Not Supported 00:21:23.916 Virtualization Management: Not Supported 00:21:23.916 Doorbell Buffer Config: Not Supported 00:21:23.916 Get LBA Status Capability: Not Supported 00:21:23.916 Command & Feature Lockdown Capability: Not Supported 00:21:23.916 Abort Command Limit: 1 00:21:23.916 Async 
Event Request Limit: 4 00:21:23.916 Number of Firmware Slots: N/A 00:21:23.916 Firmware Slot 1 Read-Only: N/A 00:21:23.916 [2024-07-12 12:32:52.958231] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125dd00) on tqpair(0x1224e60): expected_datao=0, payload_size=8 [2024-07-12 12:32:52.958236] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter [2024-07-12 12:32:52.958243] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter [2024-07-12 12:32:52.958247] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter [2024-07-12 12:32:52.958263] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 [2024-07-12 12:32:52.958270] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 [2024-07-12 12:32:52.958274] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter [2024-07-12 12:32:52.958278] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125dd00) on tqpair=0x1224e60 00:21:23.916 Firmware Activation Without Reset: N/A 00:21:23.916 Multiple Update Detection Support: N/A 00:21:23.916 Firmware Update Granularity: No Information Provided 00:21:23.916 Per-Namespace SMART Log: No 00:21:23.916 Asymmetric Namespace Access Log Page: Not Supported 00:21:23.916 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:23.916 Command Effects Log Page: Not Supported 00:21:23.916 Get Log Page Extended Data: Supported 00:21:23.916 Telemetry Log Pages: Not Supported 00:21:23.916 Persistent Event Log Pages: Not Supported 00:21:23.916 Supported Log Pages Log Page: May Support 00:21:23.916 Commands Supported & Effects Log Page: Not Supported 00:21:23.916 Feature Identifiers & Effects Log Page:May Support 00:21:23.916 NVMe-MI Commands & Effects Log Page: May Support 00:21:23.916 Data Area 4 for Telemetry Log: Not Supported 00:21:23.916 Error Log Page Entries Supported: 128 00:21:23.916 Keep Alive: Not Supported 00:21:23.916 00:21:23.916 NVM Command Set Attributes 00:21:23.916 ========================== 00:21:23.916 Submission Queue Entry Size 00:21:23.916 Max: 1 00:21:23.916 Min: 1 00:21:23.916 Completion Queue Entry Size 00:21:23.916 Max: 1 00:21:23.916 Min: 1 00:21:23.916 Number of Namespaces: 0 00:21:23.916 Compare Command: Not Supported 00:21:23.916 Write Uncorrectable Command: Not Supported 00:21:23.916 Dataset Management Command: Not Supported 00:21:23.916 Write Zeroes Command: Not Supported 00:21:23.916 Set Features Save Field: Not Supported 00:21:23.916 Reservations: Not Supported 00:21:23.916 Timestamp: Not Supported 00:21:23.916 Copy: Not Supported 00:21:23.916 Volatile Write Cache: Not Present 00:21:23.916 Atomic Write Unit (Normal): 1 00:21:23.916 Atomic Write Unit (PFail): 1 00:21:23.916 Atomic Compare & Write Unit: 1 00:21:23.916 Fused Compare & Write: Supported 00:21:23.916 Scatter-Gather List 00:21:23.916 SGL Command Set: Supported 00:21:23.916 SGL Keyed: Supported 00:21:23.916 SGL Bit Bucket Descriptor: Not Supported 00:21:23.916 SGL Metadata Pointer: Not Supported 00:21:23.916 Oversized SGL: Not Supported 00:21:23.916 SGL Metadata Address: Not Supported 00:21:23.916 SGL Offset: Supported 00:21:23.916 Transport SGL Data Block: Not Supported 00:21:23.916 Replay Protected Memory Block: Not Supported 00:21:23.916 00:21:23.916 Firmware Slot Information 00:21:23.916 ========================= 00:21:23.916 Active slot: 0 00:21:23.916 00:21:23.916 00:21:23.916 Error Log 00:21:23.916 ========= 00:21:23.916 00:21:23.916 Active
Namespaces 00:21:23.916 ================= 00:21:23.916 Discovery Log Page 00:21:23.916 ================== 00:21:23.916 Generation Counter: 2 00:21:23.916 Number of Records: 2 00:21:23.916 Record Format: 0 00:21:23.916 00:21:23.916 Discovery Log Entry 0 00:21:23.916 ---------------------- 00:21:23.916 Transport Type: 3 (TCP) 00:21:23.916 Address Family: 1 (IPv4) 00:21:23.916 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:23.916 Entry Flags: 00:21:23.916 Duplicate Returned Information: 1 00:21:23.916 Explicit Persistent Connection Support for Discovery: 1 00:21:23.916 Transport Requirements: 00:21:23.916 Secure Channel: Not Required 00:21:23.916 Port ID: 0 (0x0000) 00:21:23.916 Controller ID: 65535 (0xffff) 00:21:23.916 Admin Max SQ Size: 128 00:21:23.916 Transport Service Identifier: 4420 00:21:23.916 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:23.916 Transport Address: 10.0.0.2 00:21:23.916 Discovery Log Entry 1 00:21:23.916 ---------------------- 00:21:23.917 Transport Type: 3 (TCP) 00:21:23.917 Address Family: 1 (IPv4) 00:21:23.917 Subsystem Type: 2 (NVM Subsystem) 00:21:23.917 Entry Flags: 00:21:23.917 Duplicate Returned Information: 0 00:21:23.917 Explicit Persistent Connection Support for Discovery: 0 00:21:23.917 Transport Requirements: 00:21:23.917 Secure Channel: Not Required 00:21:23.917 Port ID: 0 (0x0000) 00:21:23.917 Controller ID: 65535 (0xffff) 00:21:23.917 Admin Max SQ Size: 128 00:21:23.917 Transport Service Identifier: 4420 00:21:23.917 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:23.917 Transport Address: 10.0.0.2 [2024-07-12 12:32:52.958411] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:23.917 [2024-07-12 12:32:52.958433] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125d700) on tqpair=0x1224e60 00:21:23.917 [2024-07-12 12:32:52.958441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.917 [2024-07-12 12:32:52.958447] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125d880) on tqpair=0x1224e60 00:21:23.917 [2024-07-12 12:32:52.958452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.917 [2024-07-12 12:32:52.958458] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125da00) on tqpair=0x1224e60 00:21:23.917 [2024-07-12 12:32:52.958462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.917 [2024-07-12 12:32:52.958468] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125db80) on tqpair=0x1224e60 00:21:23.917 [2024-07-12 12:32:52.958472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.917 [2024-07-12 12:32:52.958483] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.958487] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.958491] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224e60) 00:21:23.917 [2024-07-12 12:32:52.958500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.917 [2024-07-12 
12:32:52.958527] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125db80, cid 3, qid 0 00:21:23.917 [2024-07-12 12:32:52.958584] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.917 [2024-07-12 12:32:52.958591] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.917 [2024-07-12 12:32:52.958595] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.958600] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125db80) on tqpair=0x1224e60 00:21:23.917 [2024-07-12 12:32:52.958608] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.958613] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.958617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224e60) 00:21:23.917 [2024-07-12 12:32:52.958624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.917 [2024-07-12 12:32:52.958646] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125db80, cid 3, qid 0 00:21:23.917 [2024-07-12 12:32:52.958717] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.917 [2024-07-12 12:32:52.958724] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.917 [2024-07-12 12:32:52.958728] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.958732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125db80) on tqpair=0x1224e60 00:21:23.917 [2024-07-12 12:32:52.958738] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:23.917 [2024-07-12 12:32:52.958743] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:23.917 [2024-07-12 12:32:52.958753] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.958758] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.958762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224e60) 00:21:23.917 [2024-07-12 12:32:52.958770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.917 [2024-07-12 12:32:52.958809] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125db80, cid 3, qid 0 00:21:23.917 [2024-07-12 12:32:52.958865] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.917 [2024-07-12 12:32:52.958872] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.917 [2024-07-12 12:32:52.958876] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.958880] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125db80) on tqpair=0x1224e60 00:21:23.917 [2024-07-12 12:32:52.958892] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.958896] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.958900] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224e60) 00:21:23.917 [2024-07-12 12:32:52.958908] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.917 [2024-07-12 12:32:52.958926] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125db80, cid 3, qid 0 00:21:23.917 [2024-07-12 12:32:52.958977] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.917 [2024-07-12 12:32:52.958984] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.917 [2024-07-12 12:32:52.958988] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.958992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125db80) on tqpair=0x1224e60 00:21:23.917 [2024-07-12 12:32:52.959003] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.959008] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.959012] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224e60) 00:21:23.917 [2024-07-12 12:32:52.959019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.917 [2024-07-12 12:32:52.959036] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125db80, cid 3, qid 0 00:21:23.917 [2024-07-12 12:32:52.959087] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.917 [2024-07-12 12:32:52.959093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.917 [2024-07-12 12:32:52.959097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.959102] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125db80) on tqpair=0x1224e60 00:21:23.917 [2024-07-12 12:32:52.959112] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.959117] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.959121] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224e60) 00:21:23.917 [2024-07-12 12:32:52.959128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.917 [2024-07-12 12:32:52.959145] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125db80, cid 3, qid 0 00:21:23.917 [2024-07-12 12:32:52.959190] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.917 [2024-07-12 12:32:52.959197] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.917 [2024-07-12 12:32:52.959201] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.959205] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125db80) on tqpair=0x1224e60 00:21:23.917 [2024-07-12 12:32:52.959215] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.959220] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.959224] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224e60) 00:21:23.917 [2024-07-12 12:32:52.959231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.917 [2024-07-12 12:32:52.959249] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x125db80, cid 3, qid 0 00:21:23.917 [2024-07-12 12:32:52.959305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.917 [2024-07-12 12:32:52.959314] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.917 [2024-07-12 12:32:52.959318] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.959322] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125db80) on tqpair=0x1224e60 00:21:23.917 [2024-07-12 12:32:52.959333] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.959338] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.917 [2024-07-12 12:32:52.959342] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224e60) 00:21:23.917 [2024-07-12 12:32:52.959350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.918 [2024-07-12 12:32:52.959369] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125db80, cid 3, qid 0 00:21:23.918 [2024-07-12 12:32:52.959415] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.918 [2024-07-12 12:32:52.959421] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.918 [2024-07-12 12:32:52.959426] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.918 [2024-07-12 12:32:52.959430] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125db80) on tqpair=0x1224e60 00:21:23.918 [2024-07-12 12:32:52.959447] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.918 [2024-07-12 12:32:52.959452] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.918 [2024-07-12 12:32:52.959456] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224e60) 00:21:23.918 [2024-07-12 12:32:52.959463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.918 [2024-07-12 12:32:52.959481] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125db80, cid 3, qid 0 00:21:23.918 [2024-07-12 12:32:52.959528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.918 [2024-07-12 12:32:52.959535] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.918 [2024-07-12 12:32:52.959539] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.918 [2024-07-12 12:32:52.959543] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125db80) on tqpair=0x1224e60 00:21:23.918 [2024-07-12 12:32:52.959553] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.918 [2024-07-12 12:32:52.959558] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.918 [2024-07-12 12:32:52.959562] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224e60) 00:21:23.918 [2024-07-12 12:32:52.959569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.918 [2024-07-12 12:32:52.959587] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125db80, cid 3, qid 0 00:21:23.918 [2024-07-12 12:32:52.959634] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.918 [2024-07-12 12:32:52.959642] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.918 [2024-07-12 12:32:52.959647] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.918 [2024-07-12 12:32:52.959651] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125db80) on tqpair=0x1224e60 00:21:23.918 [2024-07-12 12:32:52.959662] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.918 [2024-07-12 12:32:52.959667] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.918 [2024-07-12 12:32:52.959671] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224e60) 00:21:23.918 [2024-07-12 12:32:52.959678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.918 [2024-07-12 12:32:52.959696] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125db80, cid 3, qid 0 00:21:23.918 [2024-07-12 12:32:52.959741] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.918 [2024-07-12 12:32:52.959748] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.918 [2024-07-12 12:32:52.959753] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.918 [2024-07-12 12:32:52.959757] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125db80) on tqpair=0x1224e60 00:21:23.918 [2024-07-12 12:32:52.959768] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.918 [2024-07-12 12:32:52.959772] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.918 [2024-07-12 12:32:52.959776] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1224e60) 00:21:23.918 [2024-07-12 12:32:52.959784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.918 [2024-07-12 12:32:52.963831] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125db80, cid 3, qid 0 00:21:23.918 [2024-07-12 12:32:52.963892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.918 [2024-07-12 12:32:52.963900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.918 [2024-07-12 12:32:52.963904] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.918 [2024-07-12 12:32:52.963909] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125db80) on tqpair=0x1224e60 00:21:23.918 [2024-07-12 12:32:52.963918] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:21:23.918 00:21:23.918 12:32:52 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:24.177 [2024-07-12 12:32:53.004117] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
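For reference, the -r argument passed to spdk_nvme_identify above is a standard SPDK transport-ID string. A minimal C sketch of performing the same connect-and-identify step through SPDK's public API (assuming only the spdk/env.h and spdk/nvme.h headers from the repo checked out above; the program name, error handling, and printed fields are illustrative and not part of this test script) could look like:

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Initialize the SPDK environment (hugepages, memory, etc.). */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch"; /* illustrative name, not from the test */
	if (spdk_env_init(&env_opts) < 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* Same transport-ID string that the test passed to spdk_nvme_identify -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	/* Synchronous connect: drives the admin-queue init sequence traced below. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect failed\n");
		return 1;
	}

	/* Identify Controller data is cached by the driver once init completes. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number: %.20s\n", cdata->sn);
	printf("Model Number:  %.40s\n", cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}

In such a sketch, spdk_nvme_connect() performs the same admin-queue bring-up (icreq exchange, FABRIC CONNECT, PROPERTY GET/SET of VS/CAP/CC/CSTS, then IDENTIFY) that the per-PDU DEBUG traces recorded below for tqpair 0xa1ce60.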
00:21:24.177 [2024-07-12 12:32:53.004164] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89079 ] 00:21:24.177 [2024-07-12 12:32:53.141029] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:24.177 [2024-07-12 12:32:53.141101] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:24.177 [2024-07-12 12:32:53.141110] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:24.177 [2024-07-12 12:32:53.141124] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:24.177 [2024-07-12 12:32:53.141132] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:24.177 [2024-07-12 12:32:53.141287] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:24.177 [2024-07-12 12:32:53.141340] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa1ce60 0 00:21:24.177 [2024-07-12 12:32:53.156810] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:24.177 [2024-07-12 12:32:53.156834] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:24.177 [2024-07-12 12:32:53.156841] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:24.177 [2024-07-12 12:32:53.156845] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:24.177 [2024-07-12 12:32:53.156893] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.177 [2024-07-12 12:32:53.156901] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.177 [2024-07-12 12:32:53.156906] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa1ce60) 00:21:24.177 [2024-07-12 12:32:53.156920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:24.177 [2024-07-12 12:32:53.156952] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55700, cid 0, qid 0 00:21:24.177 [2024-07-12 12:32:53.164804] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.177 [2024-07-12 12:32:53.164826] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.177 [2024-07-12 12:32:53.164831] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.177 [2024-07-12 12:32:53.164837] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55700) on tqpair=0xa1ce60 00:21:24.177 [2024-07-12 12:32:53.164848] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:24.177 [2024-07-12 12:32:53.164857] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:24.177 [2024-07-12 12:32:53.164864] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:24.177 [2024-07-12 12:32:53.164883] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.177 [2024-07-12 12:32:53.164889] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.177 [2024-07-12 12:32:53.164894] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa1ce60) 00:21:24.177 [2024-07-12 12:32:53.164903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.177 [2024-07-12 12:32:53.164931] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55700, cid 0, qid 0 00:21:24.177 [2024-07-12 12:32:53.164990] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.177 [2024-07-12 12:32:53.164998] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.177 [2024-07-12 12:32:53.165002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.177 [2024-07-12 12:32:53.165007] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55700) on tqpair=0xa1ce60 00:21:24.177 [2024-07-12 12:32:53.165013] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:24.178 [2024-07-12 12:32:53.165021] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:24.178 [2024-07-12 12:32:53.165030] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.165035] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.165039] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa1ce60) 00:21:24.178 [2024-07-12 12:32:53.165047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.178 [2024-07-12 12:32:53.165066] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55700, cid 0, qid 0 00:21:24.178 [2024-07-12 12:32:53.165120] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.178 [2024-07-12 12:32:53.165127] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.178 [2024-07-12 12:32:53.165131] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.165136] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55700) on tqpair=0xa1ce60 00:21:24.178 [2024-07-12 12:32:53.165143] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:24.178 [2024-07-12 12:32:53.165152] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:24.178 [2024-07-12 12:32:53.165160] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.165164] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.165169] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa1ce60) 00:21:24.178 [2024-07-12 12:32:53.165176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.178 [2024-07-12 12:32:53.165195] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55700, cid 0, qid 0 00:21:24.178 [2024-07-12 12:32:53.165247] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.178 [2024-07-12 12:32:53.165255] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.178 [2024-07-12 12:32:53.165259] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.165263] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55700) on tqpair=0xa1ce60 00:21:24.178 [2024-07-12 12:32:53.165270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:24.178 [2024-07-12 12:32:53.165280] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.165286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.165290] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa1ce60) 00:21:24.178 [2024-07-12 12:32:53.165297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.178 [2024-07-12 12:32:53.165315] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55700, cid 0, qid 0 00:21:24.178 [2024-07-12 12:32:53.165360] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.178 [2024-07-12 12:32:53.165368] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.178 [2024-07-12 12:32:53.165372] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.165376] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55700) on tqpair=0xa1ce60 00:21:24.178 [2024-07-12 12:32:53.165382] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:24.178 [2024-07-12 12:32:53.165387] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:24.178 [2024-07-12 12:32:53.165396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:24.178 [2024-07-12 12:32:53.165502] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:24.178 [2024-07-12 12:32:53.165507] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:24.178 [2024-07-12 12:32:53.165517] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.165522] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.165526] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa1ce60) 00:21:24.178 [2024-07-12 12:32:53.165534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.178 [2024-07-12 12:32:53.165552] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55700, cid 0, qid 0 00:21:24.178 [2024-07-12 12:32:53.165597] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.178 [2024-07-12 12:32:53.165604] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.178 [2024-07-12 12:32:53.165608] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.165613] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55700) on tqpair=0xa1ce60 00:21:24.178 [2024-07-12 12:32:53.165618] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:24.178 [2024-07-12 12:32:53.165629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.165634] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.165638] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa1ce60) 00:21:24.178 [2024-07-12 12:32:53.165646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.178 [2024-07-12 12:32:53.165663] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55700, cid 0, qid 0 00:21:24.178 [2024-07-12 12:32:53.165709] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.178 [2024-07-12 12:32:53.165716] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.178 [2024-07-12 12:32:53.165720] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.165725] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55700) on tqpair=0xa1ce60 00:21:24.178 [2024-07-12 12:32:53.165730] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:24.178 [2024-07-12 12:32:53.165736] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:24.178 [2024-07-12 12:32:53.165744] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:24.178 [2024-07-12 12:32:53.165756] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:24.178 [2024-07-12 12:32:53.165766] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.165771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa1ce60) 00:21:24.178 [2024-07-12 12:32:53.165779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.178 [2024-07-12 12:32:53.165813] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55700, cid 0, qid 0 00:21:24.178 [2024-07-12 12:32:53.165904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.178 [2024-07-12 12:32:53.165911] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.178 [2024-07-12 12:32:53.165916] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.165920] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa1ce60): datao=0, datal=4096, cccid=0 00:21:24.178 [2024-07-12 12:32:53.165926] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa55700) on tqpair(0xa1ce60): expected_datao=0, payload_size=4096 00:21:24.178 [2024-07-12 12:32:53.165931] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.165940] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.165945] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.178 [2024-07-12 
12:32:53.165954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.178 [2024-07-12 12:32:53.165961] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.178 [2024-07-12 12:32:53.165965] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.165969] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55700) on tqpair=0xa1ce60 00:21:24.178 [2024-07-12 12:32:53.165978] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:24.178 [2024-07-12 12:32:53.165984] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:24.178 [2024-07-12 12:32:53.165989] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:24.178 [2024-07-12 12:32:53.165994] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:24.178 [2024-07-12 12:32:53.165999] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:24.178 [2024-07-12 12:32:53.166005] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:24.178 [2024-07-12 12:32:53.166016] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:24.178 [2024-07-12 12:32:53.166024] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166029] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166033] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa1ce60) 00:21:24.178 [2024-07-12 12:32:53.166041] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.178 [2024-07-12 12:32:53.166061] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55700, cid 0, qid 0 00:21:24.178 [2024-07-12 12:32:53.166115] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.178 [2024-07-12 12:32:53.166122] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.178 [2024-07-12 12:32:53.166127] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166131] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55700) on tqpair=0xa1ce60 00:21:24.178 [2024-07-12 12:32:53.166140] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166145] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166149] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa1ce60) 00:21:24.178 [2024-07-12 12:32:53.166156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.178 [2024-07-12 12:32:53.166163] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166167] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166171] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa1ce60) 00:21:24.178 
[2024-07-12 12:32:53.166177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.178 [2024-07-12 12:32:53.166184] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166189] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166193] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa1ce60) 00:21:24.178 [2024-07-12 12:32:53.166199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.178 [2024-07-12 12:32:53.166206] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166210] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166214] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.178 [2024-07-12 12:32:53.166220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.178 [2024-07-12 12:32:53.166226] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:24.178 [2024-07-12 12:32:53.166240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:24.178 [2024-07-12 12:32:53.166248] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166253] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa1ce60) 00:21:24.178 [2024-07-12 12:32:53.166260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.178 [2024-07-12 12:32:53.166281] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55700, cid 0, qid 0 00:21:24.178 [2024-07-12 12:32:53.166288] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55880, cid 1, qid 0 00:21:24.178 [2024-07-12 12:32:53.166293] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55a00, cid 2, qid 0 00:21:24.178 [2024-07-12 12:32:53.166299] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.178 [2024-07-12 12:32:53.166304] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55d00, cid 4, qid 0 00:21:24.178 [2024-07-12 12:32:53.166393] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.178 [2024-07-12 12:32:53.166401] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.178 [2024-07-12 12:32:53.166405] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166409] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55d00) on tqpair=0xa1ce60 00:21:24.178 [2024-07-12 12:32:53.166415] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:24.178 [2024-07-12 12:32:53.166425] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:24.178 [2024-07-12 12:32:53.166436] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:24.178 [2024-07-12 12:32:53.166443] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:24.178 [2024-07-12 12:32:53.166450] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166455] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166459] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa1ce60) 00:21:24.178 [2024-07-12 12:32:53.166467] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.178 [2024-07-12 12:32:53.166488] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55d00, cid 4, qid 0 00:21:24.178 [2024-07-12 12:32:53.166543] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.178 [2024-07-12 12:32:53.166551] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.178 [2024-07-12 12:32:53.166555] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166559] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55d00) on tqpair=0xa1ce60 00:21:24.178 [2024-07-12 12:32:53.166621] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:24.178 [2024-07-12 12:32:53.166638] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:24.178 [2024-07-12 12:32:53.166648] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166653] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa1ce60) 00:21:24.178 [2024-07-12 12:32:53.166661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.178 [2024-07-12 12:32:53.166681] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55d00, cid 4, qid 0 00:21:24.178 [2024-07-12 12:32:53.166748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.178 [2024-07-12 12:32:53.166755] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.178 [2024-07-12 12:32:53.166759] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166763] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa1ce60): datao=0, datal=4096, cccid=4 00:21:24.178 [2024-07-12 12:32:53.166768] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa55d00) on tqpair(0xa1ce60): expected_datao=0, payload_size=4096 00:21:24.178 [2024-07-12 12:32:53.166773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166781] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166798] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166808] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.178 [2024-07-12 12:32:53.166814] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:21:24.178 [2024-07-12 12:32:53.166818] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.178 [2024-07-12 12:32:53.166823] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55d00) on tqpair=0xa1ce60 00:21:24.178 [2024-07-12 12:32:53.166838] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:24.179 [2024-07-12 12:32:53.166850] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:24.179 [2024-07-12 12:32:53.166862] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:24.179 [2024-07-12 12:32:53.166871] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.166876] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa1ce60) 00:21:24.179 [2024-07-12 12:32:53.166884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.179 [2024-07-12 12:32:53.166905] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55d00, cid 4, qid 0 00:21:24.179 [2024-07-12 12:32:53.166980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.179 [2024-07-12 12:32:53.166988] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.179 [2024-07-12 12:32:53.166992] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.166996] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa1ce60): datao=0, datal=4096, cccid=4 00:21:24.179 [2024-07-12 12:32:53.167001] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa55d00) on tqpair(0xa1ce60): expected_datao=0, payload_size=4096 00:21:24.179 [2024-07-12 12:32:53.167006] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167013] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167018] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167026] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.179 [2024-07-12 12:32:53.167033] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.179 [2024-07-12 12:32:53.167037] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167041] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55d00) on tqpair=0xa1ce60 00:21:24.179 [2024-07-12 12:32:53.167058] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:24.179 [2024-07-12 12:32:53.167070] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:24.179 [2024-07-12 12:32:53.167079] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa1ce60) 00:21:24.179 [2024-07-12 12:32:53.167091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.179 [2024-07-12 12:32:53.167111] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55d00, cid 4, qid 0 00:21:24.179 [2024-07-12 12:32:53.167173] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.179 [2024-07-12 12:32:53.167180] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.179 [2024-07-12 12:32:53.167184] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167188] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa1ce60): datao=0, datal=4096, cccid=4 00:21:24.179 [2024-07-12 12:32:53.167193] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa55d00) on tqpair(0xa1ce60): expected_datao=0, payload_size=4096 00:21:24.179 [2024-07-12 12:32:53.167198] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167205] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167210] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.179 [2024-07-12 12:32:53.167225] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.179 [2024-07-12 12:32:53.167229] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55d00) on tqpair=0xa1ce60 00:21:24.179 [2024-07-12 12:32:53.167242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:24.179 [2024-07-12 12:32:53.167251] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:24.179 [2024-07-12 12:32:53.167263] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:24.179 [2024-07-12 12:32:53.167272] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:24.179 [2024-07-12 12:32:53.167277] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:24.179 [2024-07-12 12:32:53.167284] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:24.179 [2024-07-12 12:32:53.167290] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:24.179 [2024-07-12 12:32:53.167295] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:24.179 [2024-07-12 12:32:53.167311] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:24.179 [2024-07-12 12:32:53.167338] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167343] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa1ce60) 00:21:24.179 [2024-07-12 12:32:53.167351] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.179 [2024-07-12 12:32:53.167359] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167364] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167368] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa1ce60) 00:21:24.179 [2024-07-12 12:32:53.167374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.179 [2024-07-12 12:32:53.167401] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55d00, cid 4, qid 0 00:21:24.179 [2024-07-12 12:32:53.167409] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55e80, cid 5, qid 0 00:21:24.179 [2024-07-12 12:32:53.167472] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.179 [2024-07-12 12:32:53.167479] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.179 [2024-07-12 12:32:53.167483] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55d00) on tqpair=0xa1ce60 00:21:24.179 [2024-07-12 12:32:53.167495] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.179 [2024-07-12 12:32:53.167501] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.179 [2024-07-12 12:32:53.167505] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167510] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55e80) on tqpair=0xa1ce60 00:21:24.179 [2024-07-12 12:32:53.167520] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167525] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa1ce60) 00:21:24.179 [2024-07-12 12:32:53.167535] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.179 [2024-07-12 12:32:53.167553] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55e80, cid 5, qid 0 00:21:24.179 [2024-07-12 12:32:53.167600] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.179 [2024-07-12 12:32:53.167607] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.179 [2024-07-12 12:32:53.167611] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167615] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55e80) on tqpair=0xa1ce60 00:21:24.179 [2024-07-12 12:32:53.167626] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167631] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa1ce60) 00:21:24.179 [2024-07-12 12:32:53.167638] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.179 [2024-07-12 12:32:53.167655] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55e80, cid 5, qid 0 00:21:24.179 [2024-07-12 12:32:53.167702] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.179 [2024-07-12 12:32:53.167709] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:21:24.179 [2024-07-12 12:32:53.167713] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167717] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55e80) on tqpair=0xa1ce60 00:21:24.179 [2024-07-12 12:32:53.167728] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167733] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa1ce60) 00:21:24.179 [2024-07-12 12:32:53.167740] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.179 [2024-07-12 12:32:53.167757] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55e80, cid 5, qid 0 00:21:24.179 [2024-07-12 12:32:53.167818] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.179 [2024-07-12 12:32:53.167827] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.179 [2024-07-12 12:32:53.167831] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55e80) on tqpair=0xa1ce60 00:21:24.179 [2024-07-12 12:32:53.167855] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167861] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa1ce60) 00:21:24.179 [2024-07-12 12:32:53.167869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.179 [2024-07-12 12:32:53.167878] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167882] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa1ce60) 00:21:24.179 [2024-07-12 12:32:53.167889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.179 [2024-07-12 12:32:53.167897] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167901] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xa1ce60) 00:21:24.179 [2024-07-12 12:32:53.167908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.179 [2024-07-12 12:32:53.167920] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.167925] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa1ce60) 00:21:24.179 [2024-07-12 12:32:53.167932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.179 [2024-07-12 12:32:53.167954] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55e80, cid 5, qid 0 00:21:24.179 [2024-07-12 12:32:53.167962] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55d00, cid 4, qid 0 00:21:24.179 [2024-07-12 12:32:53.167967] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa56000, cid 6, qid 0 00:21:24.179 [2024-07-12 
12:32:53.167972] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa56180, cid 7, qid 0 00:21:24.179 [2024-07-12 12:32:53.168123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.179 [2024-07-12 12:32:53.168131] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.179 [2024-07-12 12:32:53.168135] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.168139] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa1ce60): datao=0, datal=8192, cccid=5 00:21:24.179 [2024-07-12 12:32:53.168144] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa55e80) on tqpair(0xa1ce60): expected_datao=0, payload_size=8192 00:21:24.179 [2024-07-12 12:32:53.168149] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.168166] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.168171] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.168178] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.179 [2024-07-12 12:32:53.168184] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.179 [2024-07-12 12:32:53.168188] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.168192] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa1ce60): datao=0, datal=512, cccid=4 00:21:24.179 [2024-07-12 12:32:53.168197] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa55d00) on tqpair(0xa1ce60): expected_datao=0, payload_size=512 00:21:24.179 [2024-07-12 12:32:53.168202] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.168208] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.168212] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.168218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.179 [2024-07-12 12:32:53.168224] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.179 [2024-07-12 12:32:53.168228] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.168232] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa1ce60): datao=0, datal=512, cccid=6 00:21:24.179 [2024-07-12 12:32:53.168237] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa56000) on tqpair(0xa1ce60): expected_datao=0, payload_size=512 00:21:24.179 [2024-07-12 12:32:53.168242] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.168248] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.168252] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.168258] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.179 [2024-07-12 12:32:53.168264] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.179 [2024-07-12 12:32:53.168268] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.168272] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa1ce60): datao=0, datal=4096, cccid=7 00:21:24.179 [2024-07-12 12:32:53.168277] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa56180) on tqpair(0xa1ce60): expected_datao=0, payload_size=4096 00:21:24.179 [2024-07-12 12:32:53.168282] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.168289] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.168293] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.168302] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.179 [2024-07-12 12:32:53.168308] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.179 [2024-07-12 12:32:53.168312] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.179 [2024-07-12 12:32:53.168316] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55e80) on tqpair=0xa1ce60 00:21:24.179 ===================================================== 00:21:24.179 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:24.179 ===================================================== 00:21:24.179 Controller Capabilities/Features 00:21:24.179 ================================ 00:21:24.179 Vendor ID: 8086 00:21:24.179 Subsystem Vendor ID: 8086 00:21:24.179 Serial Number: SPDK00000000000001 00:21:24.179 Model Number: SPDK bdev Controller 00:21:24.179 Firmware Version: 24.09 00:21:24.179 Recommended Arb Burst: 6 00:21:24.179 IEEE OUI Identifier: e4 d2 5c 00:21:24.179 Multi-path I/O 00:21:24.179 May have multiple subsystem ports: Yes 00:21:24.179 May have multiple controllers: Yes 00:21:24.179 Associated with SR-IOV VF: No 00:21:24.179 Max Data Transfer Size: 131072 00:21:24.179 Max Number of Namespaces: 32 00:21:24.179 Max Number of I/O Queues: 127 00:21:24.179 NVMe Specification Version (VS): 1.3 00:21:24.179 NVMe Specification Version (Identify): 1.3 00:21:24.179 Maximum Queue Entries: 128 00:21:24.179 Contiguous Queues Required: Yes 00:21:24.179 Arbitration Mechanisms Supported 00:21:24.179 Weighted Round Robin: Not Supported 00:21:24.179 Vendor Specific: Not Supported 00:21:24.179 Reset Timeout: 15000 ms 00:21:24.179 Doorbell Stride: 4 bytes 00:21:24.179 NVM Subsystem Reset: Not Supported 00:21:24.179 Command Sets Supported 00:21:24.179 NVM Command Set: Supported 00:21:24.179 Boot Partition: Not Supported 00:21:24.179 Memory Page Size Minimum: 4096 bytes 00:21:24.179 Memory Page Size Maximum: 4096 bytes 00:21:24.179 Persistent Memory Region: Not Supported 00:21:24.179 Optional Asynchronous Events Supported 00:21:24.179 Namespace Attribute Notices: Supported 00:21:24.179 Firmware Activation Notices: Not Supported 00:21:24.179 ANA Change Notices: Not Supported 00:21:24.179 PLE Aggregate Log Change Notices: Not Supported 00:21:24.179 LBA Status Info Alert Notices: Not Supported 00:21:24.179 EGE Aggregate Log Change Notices: Not Supported 00:21:24.179 Normal NVM Subsystem Shutdown event: Not Supported 00:21:24.179 Zone Descriptor Change Notices: Not Supported 00:21:24.179 Discovery Log Change Notices: Not Supported 00:21:24.179 Controller Attributes 00:21:24.179 128-bit Host Identifier: Supported 00:21:24.179 Non-Operational Permissive Mode: Not Supported 00:21:24.179 NVM Sets: Not Supported 00:21:24.179 Read Recovery Levels: Not Supported 00:21:24.179 Endurance Groups: Not Supported 00:21:24.179 Predictable Latency Mode: Not Supported 00:21:24.179 Traffic Based Keep ALive: Not Supported 00:21:24.179 Namespace Granularity: Not Supported 00:21:24.179 SQ Associations: Not 
Supported 00:21:24.180 UUID List: Not Supported 00:21:24.180 Multi-Domain Subsystem: Not Supported 00:21:24.180 Fixed Capacity Management: Not Supported 00:21:24.180 Variable Capacity Management: Not Supported 00:21:24.180 Delete Endurance Group: Not Supported 00:21:24.180 Delete NVM Set: Not Supported 00:21:24.180 Extended LBA Formats Supported: Not Supported 00:21:24.180 Flexible Data Placement Supported: Not Supported 00:21:24.180 00:21:24.180 Controller Memory Buffer Support 00:21:24.180 ================================ 00:21:24.180 Supported: No 00:21:24.180 00:21:24.180 Persistent Memory Region Support 00:21:24.180 ================================ 00:21:24.180 Supported: No 00:21:24.180 00:21:24.180 Admin Command Set Attributes 00:21:24.180 ============================ 00:21:24.180 Security Send/Receive: Not Supported 00:21:24.180 Format NVM: Not Supported 00:21:24.180 Firmware Activate/Download: Not Supported 00:21:24.180 Namespace Management: Not Supported 00:21:24.180 Device Self-Test: Not Supported 00:21:24.180 Directives: Not Supported 00:21:24.180 NVMe-MI: Not Supported 00:21:24.180 Virtualization Management: Not Supported 00:21:24.180 Doorbell Buffer Config: Not Supported 00:21:24.180 Get LBA Status Capability: Not Supported 00:21:24.180 Command & Feature Lockdown Capability: Not Supported 00:21:24.180 Abort Command Limit: 4 00:21:24.180 Async Event Request Limit: 4 00:21:24.180 Number of Firmware Slots: N/A 00:21:24.180 Firmware Slot 1 Read-Only: N/A 00:21:24.180 Firmware Activation Without Reset: [2024-07-12 12:32:53.168334] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.180 [2024-07-12 12:32:53.168342] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.180 [2024-07-12 12:32:53.168346] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.168350] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55d00) on tqpair=0xa1ce60 00:21:24.180 [2024-07-12 12:32:53.168363] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.180 [2024-07-12 12:32:53.168370] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.180 [2024-07-12 12:32:53.168374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.168379] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa56000) on tqpair=0xa1ce60 00:21:24.180 [2024-07-12 12:32:53.168386] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.180 [2024-07-12 12:32:53.168392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.180 [2024-07-12 12:32:53.168396] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.168401] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa56180) on tqpair=0xa1ce60 00:21:24.180 N/A 00:21:24.180 Multiple Update Detection Support: N/A 00:21:24.180 Firmware Update Granularity: No Information Provided 00:21:24.180 Per-Namespace SMART Log: No 00:21:24.180 Asymmetric Namespace Access Log Page: Not Supported 00:21:24.180 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:24.180 Command Effects Log Page: Supported 00:21:24.180 Get Log Page Extended Data: Supported 00:21:24.180 Telemetry Log Pages: Not Supported 00:21:24.180 Persistent Event Log Pages: Not Supported 00:21:24.180 Supported Log Pages Log Page: May Support 00:21:24.180 Commands Supported & Effects Log Page: Not Supported 00:21:24.180 Feature Identifiers & 
Effects Log Page:May Support 00:21:24.180 NVMe-MI Commands & Effects Log Page: May Support 00:21:24.180 Data Area 4 for Telemetry Log: Not Supported 00:21:24.180 Error Log Page Entries Supported: 128 00:21:24.180 Keep Alive: Supported 00:21:24.180 Keep Alive Granularity: 10000 ms 00:21:24.180 00:21:24.180 NVM Command Set Attributes 00:21:24.180 ========================== 00:21:24.180 Submission Queue Entry Size 00:21:24.180 Max: 64 00:21:24.180 Min: 64 00:21:24.180 Completion Queue Entry Size 00:21:24.180 Max: 16 00:21:24.180 Min: 16 00:21:24.180 Number of Namespaces: 32 00:21:24.180 Compare Command: Supported 00:21:24.180 Write Uncorrectable Command: Not Supported 00:21:24.180 Dataset Management Command: Supported 00:21:24.180 Write Zeroes Command: Supported 00:21:24.180 Set Features Save Field: Not Supported 00:21:24.180 Reservations: Supported 00:21:24.180 Timestamp: Not Supported 00:21:24.180 Copy: Supported 00:21:24.180 Volatile Write Cache: Present 00:21:24.180 Atomic Write Unit (Normal): 1 00:21:24.180 Atomic Write Unit (PFail): 1 00:21:24.180 Atomic Compare & Write Unit: 1 00:21:24.180 Fused Compare & Write: Supported 00:21:24.180 Scatter-Gather List 00:21:24.180 SGL Command Set: Supported 00:21:24.180 SGL Keyed: Supported 00:21:24.180 SGL Bit Bucket Descriptor: Not Supported 00:21:24.180 SGL Metadata Pointer: Not Supported 00:21:24.180 Oversized SGL: Not Supported 00:21:24.180 SGL Metadata Address: Not Supported 00:21:24.180 SGL Offset: Supported 00:21:24.180 Transport SGL Data Block: Not Supported 00:21:24.180 Replay Protected Memory Block: Not Supported 00:21:24.180 00:21:24.180 Firmware Slot Information 00:21:24.180 ========================= 00:21:24.180 Active slot: 1 00:21:24.180 Slot 1 Firmware Revision: 24.09 00:21:24.180 00:21:24.180 00:21:24.180 Commands Supported and Effects 00:21:24.180 ============================== 00:21:24.180 Admin Commands 00:21:24.180 -------------- 00:21:24.180 Get Log Page (02h): Supported 00:21:24.180 Identify (06h): Supported 00:21:24.180 Abort (08h): Supported 00:21:24.180 Set Features (09h): Supported 00:21:24.180 Get Features (0Ah): Supported 00:21:24.180 Asynchronous Event Request (0Ch): Supported 00:21:24.180 Keep Alive (18h): Supported 00:21:24.180 I/O Commands 00:21:24.180 ------------ 00:21:24.180 Flush (00h): Supported LBA-Change 00:21:24.180 Write (01h): Supported LBA-Change 00:21:24.180 Read (02h): Supported 00:21:24.180 Compare (05h): Supported 00:21:24.180 Write Zeroes (08h): Supported LBA-Change 00:21:24.180 Dataset Management (09h): Supported LBA-Change 00:21:24.180 Copy (19h): Supported LBA-Change 00:21:24.180 00:21:24.180 Error Log 00:21:24.180 ========= 00:21:24.180 00:21:24.180 Arbitration 00:21:24.180 =========== 00:21:24.180 Arbitration Burst: 1 00:21:24.180 00:21:24.180 Power Management 00:21:24.180 ================ 00:21:24.180 Number of Power States: 1 00:21:24.180 Current Power State: Power State #0 00:21:24.180 Power State #0: 00:21:24.180 Max Power: 0.00 W 00:21:24.180 Non-Operational State: Operational 00:21:24.180 Entry Latency: Not Reported 00:21:24.180 Exit Latency: Not Reported 00:21:24.180 Relative Read Throughput: 0 00:21:24.180 Relative Read Latency: 0 00:21:24.180 Relative Write Throughput: 0 00:21:24.180 Relative Write Latency: 0 00:21:24.180 Idle Power: Not Reported 00:21:24.180 Active Power: Not Reported 00:21:24.180 Non-Operational Permissive Mode: Not Supported 00:21:24.180 00:21:24.180 Health Information 00:21:24.180 ================== 00:21:24.180 Critical Warnings: 00:21:24.180 Available Spare Space: 
OK 00:21:24.180 Temperature: OK 00:21:24.180 Device Reliability: OK 00:21:24.180 Read Only: No 00:21:24.180 Volatile Memory Backup: OK 00:21:24.180 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:24.180 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:24.180 Available Spare: 0% 00:21:24.180 Available Spare Threshold: 0% 00:21:24.180 Life Percentage Used:[2024-07-12 12:32:53.168510] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.168518] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa1ce60) 00:21:24.180 [2024-07-12 12:32:53.168526] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.180 [2024-07-12 12:32:53.168549] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa56180, cid 7, qid 0 00:21:24.180 [2024-07-12 12:32:53.168599] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.180 [2024-07-12 12:32:53.168606] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.180 [2024-07-12 12:32:53.168610] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.168615] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa56180) on tqpair=0xa1ce60 00:21:24.180 [2024-07-12 12:32:53.168653] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:24.180 [2024-07-12 12:32:53.168665] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55700) on tqpair=0xa1ce60 00:21:24.180 [2024-07-12 12:32:53.168672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.180 [2024-07-12 12:32:53.168678] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55880) on tqpair=0xa1ce60 00:21:24.180 [2024-07-12 12:32:53.168683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.180 [2024-07-12 12:32:53.168688] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55a00) on tqpair=0xa1ce60 00:21:24.180 [2024-07-12 12:32:53.168693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.180 [2024-07-12 12:32:53.168699] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.180 [2024-07-12 12:32:53.168704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.180 [2024-07-12 12:32:53.168713] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.168718] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.168722] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.180 [2024-07-12 12:32:53.168730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.180 [2024-07-12 12:32:53.168752] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.180 [2024-07-12 12:32:53.172803] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.180 [2024-07-12 12:32:53.172823] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.180 [2024-07-12 12:32:53.172829] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.172834] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.180 [2024-07-12 12:32:53.172844] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.172849] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.172854] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.180 [2024-07-12 12:32:53.172862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.180 [2024-07-12 12:32:53.172891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.180 [2024-07-12 12:32:53.172967] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.180 [2024-07-12 12:32:53.172975] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.180 [2024-07-12 12:32:53.172979] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.172983] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.180 [2024-07-12 12:32:53.172989] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:24.180 [2024-07-12 12:32:53.172995] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:24.180 [2024-07-12 12:32:53.173005] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.173010] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.173014] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.180 [2024-07-12 12:32:53.173022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.180 [2024-07-12 12:32:53.173040] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.180 [2024-07-12 12:32:53.173092] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.180 [2024-07-12 12:32:53.173099] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.180 [2024-07-12 12:32:53.173103] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.173108] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.180 [2024-07-12 12:32:53.173119] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.173124] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.173128] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.180 [2024-07-12 12:32:53.173136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.180 [2024-07-12 12:32:53.173153] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.180 [2024-07-12 12:32:53.173204] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.180 [2024-07-12 12:32:53.173211] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.180 [2024-07-12 12:32:53.173215] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.173219] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.180 [2024-07-12 12:32:53.173230] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.173235] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.173239] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.180 [2024-07-12 12:32:53.173247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.180 [2024-07-12 12:32:53.173263] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.180 [2024-07-12 12:32:53.173311] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.180 [2024-07-12 12:32:53.173318] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.180 [2024-07-12 12:32:53.173322] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.173326] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.180 [2024-07-12 12:32:53.173337] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.173342] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.180 [2024-07-12 12:32:53.173346] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.181 [2024-07-12 12:32:53.173354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.181 [2024-07-12 12:32:53.173370] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.181 [2024-07-12 12:32:53.173419] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.181 [2024-07-12 12:32:53.173426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.181 [2024-07-12 12:32:53.173430] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.173434] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.181 [2024-07-12 12:32:53.173445] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.173450] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.173454] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.181 [2024-07-12 12:32:53.173462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.181 [2024-07-12 12:32:53.173478] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.181 [2024-07-12 12:32:53.173526] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.181 [2024-07-12 12:32:53.173533] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.181 [2024-07-12 12:32:53.173537] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.173541] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.181 [2024-07-12 12:32:53.173552] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.173557] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.173561] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.181 [2024-07-12 12:32:53.173568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.181 [2024-07-12 12:32:53.173585] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.181 [2024-07-12 12:32:53.173636] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.181 [2024-07-12 12:32:53.173647] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.181 [2024-07-12 12:32:53.173652] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.173656] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.181 [2024-07-12 12:32:53.173667] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.173672] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.173676] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.181 [2024-07-12 12:32:53.173683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.181 [2024-07-12 12:32:53.173700] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.181 [2024-07-12 12:32:53.173747] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.181 [2024-07-12 12:32:53.173754] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.181 [2024-07-12 12:32:53.173758] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.173763] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.181 [2024-07-12 12:32:53.173773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.173778] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.173782] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.181 [2024-07-12 12:32:53.173803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.181 [2024-07-12 12:32:53.173823] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.181 [2024-07-12 12:32:53.173873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.181 [2024-07-12 12:32:53.173880] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.181 [2024-07-12 12:32:53.173884] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.173889] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.181 
[2024-07-12 12:32:53.173900] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.173905] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.173909] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.181 [2024-07-12 12:32:53.173916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.181 [2024-07-12 12:32:53.173934] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.181 [2024-07-12 12:32:53.173978] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.181 [2024-07-12 12:32:53.173985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.181 [2024-07-12 12:32:53.173989] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.173994] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.181 [2024-07-12 12:32:53.174005] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174009] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174013] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.181 [2024-07-12 12:32:53.174021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.181 [2024-07-12 12:32:53.174038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.181 [2024-07-12 12:32:53.174086] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.181 [2024-07-12 12:32:53.174093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.181 [2024-07-12 12:32:53.174097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174101] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.181 [2024-07-12 12:32:53.174112] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174117] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174121] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.181 [2024-07-12 12:32:53.174129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.181 [2024-07-12 12:32:53.174146] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.181 [2024-07-12 12:32:53.174191] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.181 [2024-07-12 12:32:53.174198] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.181 [2024-07-12 12:32:53.174202] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174206] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.181 [2024-07-12 12:32:53.174217] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174222] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.181 [2024-07-12 
12:32:53.174226] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.181 [2024-07-12 12:32:53.174233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.181 [2024-07-12 12:32:53.174250] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.181 [2024-07-12 12:32:53.174295] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.181 [2024-07-12 12:32:53.174302] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.181 [2024-07-12 12:32:53.174306] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174310] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.181 [2024-07-12 12:32:53.174321] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174326] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174330] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.181 [2024-07-12 12:32:53.174338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.181 [2024-07-12 12:32:53.174355] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.181 [2024-07-12 12:32:53.174401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.181 [2024-07-12 12:32:53.174408] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.181 [2024-07-12 12:32:53.174412] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174416] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.181 [2024-07-12 12:32:53.174427] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174432] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174436] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.181 [2024-07-12 12:32:53.174444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.181 [2024-07-12 12:32:53.174461] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.181 [2024-07-12 12:32:53.174506] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.181 [2024-07-12 12:32:53.174513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.181 [2024-07-12 12:32:53.174517] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174522] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.181 [2024-07-12 12:32:53.174533] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174537] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174541] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.181 [2024-07-12 12:32:53.174549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.181 [2024-07-12 12:32:53.174566] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.181 [2024-07-12 12:32:53.174614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.181 [2024-07-12 12:32:53.174621] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.181 [2024-07-12 12:32:53.174625] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174629] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.181 [2024-07-12 12:32:53.174640] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174645] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174649] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.181 [2024-07-12 12:32:53.174657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.181 [2024-07-12 12:32:53.174675] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.181 [2024-07-12 12:32:53.174717] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.181 [2024-07-12 12:32:53.174724] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.181 [2024-07-12 12:32:53.174728] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.181 [2024-07-12 12:32:53.174743] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174751] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.181 [2024-07-12 12:32:53.174759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.181 [2024-07-12 12:32:53.174776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.181 [2024-07-12 12:32:53.174841] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.181 [2024-07-12 12:32:53.174850] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.181 [2024-07-12 12:32:53.174854] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174858] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.181 [2024-07-12 12:32:53.174869] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174874] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174878] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.181 [2024-07-12 12:32:53.174886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.181 [2024-07-12 12:32:53.174905] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.181 [2024-07-12 
12:32:53.174951] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.181 [2024-07-12 12:32:53.174959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.181 [2024-07-12 12:32:53.174963] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174967] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.181 [2024-07-12 12:32:53.174978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174983] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.174987] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.181 [2024-07-12 12:32:53.174995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.181 [2024-07-12 12:32:53.175012] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.181 [2024-07-12 12:32:53.175060] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.181 [2024-07-12 12:32:53.175067] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.181 [2024-07-12 12:32:53.175071] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.175075] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.181 [2024-07-12 12:32:53.175086] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.175091] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.175095] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.181 [2024-07-12 12:32:53.175103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.181 [2024-07-12 12:32:53.175120] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.181 [2024-07-12 12:32:53.175164] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.181 [2024-07-12 12:32:53.175172] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.181 [2024-07-12 12:32:53.175176] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.175180] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.181 [2024-07-12 12:32:53.175191] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.175195] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.175199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.181 [2024-07-12 12:32:53.175207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.181 [2024-07-12 12:32:53.175224] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.181 [2024-07-12 12:32:53.175275] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.181 [2024-07-12 12:32:53.175282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.181 [2024-07-12 
12:32:53.175286] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.175290] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.181 [2024-07-12 12:32:53.175310] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.175315] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.175319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa1ce60) 00:21:24.181 [2024-07-12 12:32:53.175327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.181 [2024-07-12 12:32:53.178818] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa55b80, cid 3, qid 0 00:21:24.181 [2024-07-12 12:32:53.178870] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.181 [2024-07-12 12:32:53.178879] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.181 [2024-07-12 12:32:53.178884] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.181 [2024-07-12 12:32:53.178889] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa55b80) on tqpair=0xa1ce60 00:21:24.181 [2024-07-12 12:32:53.178899] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:21:24.181 0% 00:21:24.181 Data Units Read: 0 00:21:24.181 Data Units Written: 0 00:21:24.181 Host Read Commands: 0 00:21:24.181 Host Write Commands: 0 00:21:24.181 Controller Busy Time: 0 minutes 00:21:24.181 Power Cycles: 0 00:21:24.181 Power On Hours: 0 hours 00:21:24.181 Unsafe Shutdowns: 0 00:21:24.181 Unrecoverable Media Errors: 0 00:21:24.181 Lifetime Error Log Entries: 0 00:21:24.181 Warning Temperature Time: 0 minutes 00:21:24.181 Critical Temperature Time: 0 minutes 00:21:24.181 00:21:24.181 Number of Queues 00:21:24.181 ================ 00:21:24.181 Number of I/O Submission Queues: 127 00:21:24.182 Number of I/O Completion Queues: 127 00:21:24.182 00:21:24.182 Active Namespaces 00:21:24.182 ================= 00:21:24.182 Namespace ID:1 00:21:24.182 Error Recovery Timeout: Unlimited 00:21:24.182 Command Set Identifier: NVM (00h) 00:21:24.182 Deallocate: Supported 00:21:24.182 Deallocated/Unwritten Error: Not Supported 00:21:24.182 Deallocated Read Value: Unknown 00:21:24.182 Deallocate in Write Zeroes: Not Supported 00:21:24.182 Deallocated Guard Field: 0xFFFF 00:21:24.182 Flush: Supported 00:21:24.182 Reservation: Supported 00:21:24.182 Namespace Sharing Capabilities: Multiple Controllers 00:21:24.182 Size (in LBAs): 131072 (0GiB) 00:21:24.182 Capacity (in LBAs): 131072 (0GiB) 00:21:24.182 Utilization (in LBAs): 131072 (0GiB) 00:21:24.182 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:24.182 EUI64: ABCDEF0123456789 00:21:24.182 UUID: 99019e5e-2df7-4289-9393-3595aebfad34 00:21:24.182 Thin Provisioning: Not Supported 00:21:24.182 Per-NS Atomic Units: Yes 00:21:24.182 Atomic Boundary Size (Normal): 0 00:21:24.182 Atomic Boundary Size (PFail): 0 00:21:24.182 Atomic Boundary Offset: 0 00:21:24.182 Maximum Single Source Range Length: 65535 00:21:24.182 Maximum Copy Length: 65535 00:21:24.182 Maximum Source Range Count: 1 00:21:24.182 NGUID/EUI64 Never Reused: No 00:21:24.182 Namespace Write Protected: No 00:21:24.182 Number of LBA Formats: 1 00:21:24.182 Current LBA Format: LBA Format #00 00:21:24.182 LBA Format #00: 
Data Size: 512 Metadata Size: 0 00:21:24.182 00:21:24.182 12:32:53 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:24.182 12:32:53 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:24.182 12:32:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.182 12:32:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.438 12:32:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.438 12:32:53 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:24.438 12:32:53 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:24.438 12:32:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:24.438 12:32:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:21:24.439 12:32:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:24.439 12:32:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:21:24.439 12:32:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:24.439 12:32:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:24.439 rmmod nvme_tcp 00:21:24.439 rmmod nvme_fabrics 00:21:24.439 rmmod nvme_keyring 00:21:24.439 12:32:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:24.439 12:32:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:21:24.439 12:32:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:21:24.439 12:32:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 89042 ']' 00:21:24.439 12:32:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 89042 00:21:24.439 12:32:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 89042 ']' 00:21:24.439 12:32:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 89042 00:21:24.439 12:32:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:21:24.439 12:32:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:24.439 12:32:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89042 00:21:24.439 12:32:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:24.439 12:32:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:24.439 12:32:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89042' 00:21:24.439 killing process with pid 89042 00:21:24.439 12:32:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 89042 00:21:24.439 12:32:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 89042 00:21:24.697 12:32:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:24.697 12:32:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:24.697 12:32:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:24.697 12:32:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:24.697 12:32:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:24.697 12:32:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.697 12:32:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
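For reference, the controller and namespace attributes dumped above are produced by the SPDK identify example that host/identify.sh drives against the target before tearing it down. A rough stand-alone equivalent using nvme-cli from a Linux initiator is sketched below; the /dev/nvme0 device names are assumptions, and the address, port and subsystem NQN are simply the ones from this run (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1), which no longer exists once the subsystem is deleted.

#!/usr/bin/env bash
# Hedged sketch: an identify-style dump via nvme-cli instead of SPDK's identify example.
set -e
modprobe nvme-tcp                                    # kernel NVMe/TCP initiator
nvme discover -t tcp -a 10.0.0.2 -s 4420             # list subsystems exposed by the discovery service
nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme id-ctrl /dev/nvme0                              # controller data structure (assumed device name)
nvme id-ns   /dev/nvme0n1                            # namespace data structure (assumed device name)
nvme disconnect -n nqn.2016-06.io.spdk:cnode1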
00:21:24.697 12:32:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.697 12:32:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:24.697 00:21:24.697 real 0m2.438s 00:21:24.697 user 0m6.939s 00:21:24.697 sys 0m0.610s 00:21:24.697 12:32:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:24.697 12:32:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.697 ************************************ 00:21:24.697 END TEST nvmf_identify 00:21:24.697 ************************************ 00:21:24.697 12:32:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:24.697 12:32:53 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:24.697 12:32:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:24.697 12:32:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:24.697 12:32:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:24.697 ************************************ 00:21:24.697 START TEST nvmf_perf 00:21:24.697 ************************************ 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:24.697 * Looking for test storage... 00:21:24.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # 
MALLOC_BLOCK_SIZE=512 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:24.697 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:24.955 Cannot find device "nvmf_tgt_br" 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:24.955 Cannot find device "nvmf_tgt_br2" 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:24.955 Cannot find device "nvmf_tgt_br" 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@158 -- # true 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:24.955 Cannot find device "nvmf_tgt_br2" 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:24.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:24.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:24.955 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:24.956 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:24.956 12:32:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:24.956 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:24.956 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:24.956 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:24.956 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:24.956 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:24.956 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:25.213 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:25.213 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:25.213 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:25.213 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:25.213 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:25.213 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:25.213 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:25.213 12:32:54 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:25.213 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:25.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:25.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:21:25.213 00:21:25.213 --- 10.0.0.2 ping statistics --- 00:21:25.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.213 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:21:25.213 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:25.213 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:25.213 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:21:25.213 00:21:25.213 --- 10.0.0.3 ping statistics --- 00:21:25.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.213 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:25.213 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:25.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:25.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:21:25.213 00:21:25.213 --- 10.0.0.1 ping statistics --- 00:21:25.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.213 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:21:25.213 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:25.213 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:21:25.213 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:25.213 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:25.213 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:25.213 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:25.213 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:25.213 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:25.213 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:25.213 12:32:54 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:25.214 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:25.214 12:32:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:25.214 12:32:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:25.214 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=89244 00:21:25.214 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 89244 00:21:25.214 12:32:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:25.214 12:32:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 89244 ']' 00:21:25.214 12:32:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.214 12:32:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:25.214 12:32:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
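Stripped of the xtrace prefixes, the veth/namespace topology that nvmf_veth_init builds in the trace above reduces to roughly the sketch below (simplified: the second target interface, its bridge port and all error handling are omitted).

#!/usr/bin/env bash
# Simplified reconstruction of the test network: the initiator side stays in the root
# namespace (10.0.0.1), the SPDK target runs inside nvmf_tgt_ns_spdk (10.0.0.2), and
# the two veth peers are joined by the nvmf_br bridge.
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2        # same sanity check as in the trace above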
00:21:25.214 12:32:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:25.214 12:32:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:25.214 [2024-07-12 12:32:54.194295] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:21:25.214 [2024-07-12 12:32:54.194388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.471 [2024-07-12 12:32:54.335588] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:25.471 [2024-07-12 12:32:54.426627] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.471 [2024-07-12 12:32:54.426680] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.471 [2024-07-12 12:32:54.426692] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.471 [2024-07-12 12:32:54.426701] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.471 [2024-07-12 12:32:54.426708] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:25.471 [2024-07-12 12:32:54.426893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.471 [2024-07-12 12:32:54.427034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:25.471 [2024-07-12 12:32:54.427766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:25.471 [2024-07-12 12:32:54.427810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.471 [2024-07-12 12:32:54.480536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:26.035 12:32:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:26.035 12:32:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:21:26.035 12:32:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:26.035 12:32:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:26.035 12:32:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:26.316 12:32:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.316 12:32:55 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:26.316 12:32:55 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:21:26.574 12:32:55 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:21:26.574 12:32:55 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:26.833 12:32:55 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:21:26.833 12:32:55 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:27.091 12:32:56 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:27.091 12:32:56 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:21:27.091 12:32:56 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:27.091 12:32:56 nvmf_tcp.nvmf_perf -- 
host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:27.091 12:32:56 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:27.349 [2024-07-12 12:32:56.319834] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.349 12:32:56 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:27.608 12:32:56 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:27.608 12:32:56 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:27.866 12:32:56 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:27.866 12:32:56 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:28.433 12:32:57 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:28.433 [2024-07-12 12:32:57.429574] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.433 12:32:57 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:28.690 12:32:57 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:28.690 12:32:57 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:21:28.690 12:32:57 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:28.690 12:32:57 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:21:30.064 Initializing NVMe Controllers 00:21:30.064 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:30.064 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:21:30.064 Initialization complete. Launching workers. 00:21:30.064 ======================================================== 00:21:30.064 Latency(us) 00:21:30.064 Device Information : IOPS MiB/s Average min max 00:21:30.064 PCIE (0000:00:10.0) NSID 1 from core 0: 23951.90 93.56 1335.12 342.37 8959.33 00:21:30.064 ======================================================== 00:21:30.064 Total : 23951.90 93.56 1335.12 342.37 8959.33 00:21:30.064 00:21:30.064 12:32:58 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:30.998 Initializing NVMe Controllers 00:21:30.998 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:30.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:30.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:30.998 Initialization complete. Launching workers. 
00:21:30.998 ======================================================== 00:21:30.998 Latency(us) 00:21:30.998 Device Information : IOPS MiB/s Average min max 00:21:30.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3755.88 14.67 264.90 105.01 4214.45 00:21:30.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8063.20 5070.50 12019.39 00:21:30.998 ======================================================== 00:21:30.998 Total : 3880.88 15.16 516.07 105.01 12019.39 00:21:30.998 00:21:31.256 12:33:00 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:32.630 Initializing NVMe Controllers 00:21:32.630 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:32.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:32.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:32.630 Initialization complete. Launching workers. 00:21:32.630 ======================================================== 00:21:32.630 Latency(us) 00:21:32.630 Device Information : IOPS MiB/s Average min max 00:21:32.630 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8706.41 34.01 3675.90 589.43 7655.88 00:21:32.630 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3982.46 15.56 8073.77 5994.65 15645.31 00:21:32.630 ======================================================== 00:21:32.630 Total : 12688.87 49.57 5056.19 589.43 15645.31 00:21:32.630 00:21:32.630 12:33:01 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:21:32.630 12:33:01 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:35.158 Initializing NVMe Controllers 00:21:35.158 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:35.158 Controller IO queue size 128, less than required. 00:21:35.158 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:35.158 Controller IO queue size 128, less than required. 00:21:35.158 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:35.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:35.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:35.158 Initialization complete. Launching workers. 
00:21:35.158 ======================================================== 00:21:35.158 Latency(us) 00:21:35.158 Device Information : IOPS MiB/s Average min max 00:21:35.158 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1569.16 392.29 83608.10 48642.66 134062.63 00:21:35.158 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 676.91 169.23 192249.23 67202.57 314210.07 00:21:35.158 ======================================================== 00:21:35.158 Total : 2246.07 561.52 116349.92 48642.66 314210.07 00:21:35.158 00:21:35.158 12:33:04 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:35.417 Initializing NVMe Controllers 00:21:35.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:35.417 Controller IO queue size 128, less than required. 00:21:35.417 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:35.417 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:35.417 Controller IO queue size 128, less than required. 00:21:35.417 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:35.417 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:21:35.417 WARNING: Some requested NVMe devices were skipped 00:21:35.417 No valid NVMe controllers or AIO or URING devices found 00:21:35.417 12:33:04 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:37.944 Initializing NVMe Controllers 00:21:37.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:37.944 Controller IO queue size 128, less than required. 00:21:37.944 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:37.944 Controller IO queue size 128, less than required. 00:21:37.944 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:37.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:37.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:37.944 Initialization complete. Launching workers. 
00:21:37.944 00:21:37.944 ==================== 00:21:37.944 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:37.944 TCP transport: 00:21:37.944 polls: 10595 00:21:37.944 idle_polls: 7169 00:21:37.944 sock_completions: 3426 00:21:37.944 nvme_completions: 5501 00:21:37.944 submitted_requests: 8216 00:21:37.944 queued_requests: 1 00:21:37.944 00:21:37.944 ==================== 00:21:37.944 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:37.944 TCP transport: 00:21:37.944 polls: 10754 00:21:37.944 idle_polls: 6342 00:21:37.944 sock_completions: 4412 00:21:37.944 nvme_completions: 6343 00:21:37.944 submitted_requests: 9462 00:21:37.944 queued_requests: 1 00:21:37.944 ======================================================== 00:21:37.944 Latency(us) 00:21:37.944 Device Information : IOPS MiB/s Average min max 00:21:37.944 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1371.95 342.99 95306.91 45222.83 159751.16 00:21:37.944 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1581.98 395.50 81443.20 40137.31 166369.40 00:21:37.944 ======================================================== 00:21:37.944 Total : 2953.93 738.48 87882.18 40137.31 166369.40 00:21:37.944 00:21:37.944 12:33:06 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:37.944 12:33:06 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:38.202 12:33:07 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:21:38.202 12:33:07 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:21:38.202 12:33:07 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:21:38.460 12:33:07 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=745c66ca-ec4b-4175-8871-ef558418c893 00:21:38.460 12:33:07 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 745c66ca-ec4b-4175-8871-ef558418c893 00:21:38.460 12:33:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=745c66ca-ec4b-4175-8871-ef558418c893 00:21:38.460 12:33:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:21:38.460 12:33:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:21:38.460 12:33:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:21:38.460 12:33:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:38.717 12:33:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:21:38.717 { 00:21:38.717 "uuid": "745c66ca-ec4b-4175-8871-ef558418c893", 00:21:38.717 "name": "lvs_0", 00:21:38.717 "base_bdev": "Nvme0n1", 00:21:38.717 "total_data_clusters": 1278, 00:21:38.717 "free_clusters": 1278, 00:21:38.717 "block_size": 4096, 00:21:38.717 "cluster_size": 4194304 00:21:38.717 } 00:21:38.717 ]' 00:21:38.717 12:33:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="745c66ca-ec4b-4175-8871-ef558418c893") .free_clusters' 00:21:38.717 12:33:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:21:38.717 12:33:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="745c66ca-ec4b-4175-8871-ef558418c893") .cluster_size' 00:21:38.975 12:33:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 
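The get_lvs_free_mb helper traced here sizes the lvol store from two fields of bdev_lvol_get_lvstores: free_mb = free_clusters * cluster_size / 1048576, i.e. 1278 * 4194304 B = 5112 MiB, which is the value echoed next and then handed to bdev_lvol_create for lbd_0. A condensed sketch of the same computation, assuming rpc.py and jq are on PATH:

    # Free space (MiB) of an lvol store by UUID, mirroring get_lvs_free_mb.
    lvs_uuid=745c66ca-ec4b-4175-8871-ef558418c893    # lvs_0 UUID from the trace
    lvs_json=$(rpc.py bdev_lvol_get_lvstores)
    fc=$(jq ".[] | select(.uuid==\"$lvs_uuid\") .free_clusters" <<<"$lvs_json")
    cs=$(jq ".[] | select(.uuid==\"$lvs_uuid\") .cluster_size"  <<<"$lvs_json")
    echo $(( fc * cs / 1024 / 1024 ))                # 1278 * 4194304 / 2^20 = 5112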
00:21:38.975 12:33:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:21:38.975 12:33:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:21:38.975 5112 00:21:38.975 12:33:07 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:21:38.975 12:33:07 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 745c66ca-ec4b-4175-8871-ef558418c893 lbd_0 5112 00:21:39.233 12:33:08 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=c0c317df-a4af-47e4-aa0f-02df895f622d 00:21:39.233 12:33:08 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore c0c317df-a4af-47e4-aa0f-02df895f622d lvs_n_0 00:21:39.491 12:33:08 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=1b94888b-170d-49a1-b118-83a7ad08b75a 00:21:39.491 12:33:08 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 1b94888b-170d-49a1-b118-83a7ad08b75a 00:21:39.491 12:33:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=1b94888b-170d-49a1-b118-83a7ad08b75a 00:21:39.491 12:33:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:21:39.491 12:33:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:21:39.491 12:33:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:21:39.491 12:33:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:39.748 12:33:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:21:39.748 { 00:21:39.748 "uuid": "745c66ca-ec4b-4175-8871-ef558418c893", 00:21:39.748 "name": "lvs_0", 00:21:39.748 "base_bdev": "Nvme0n1", 00:21:39.748 "total_data_clusters": 1278, 00:21:39.748 "free_clusters": 0, 00:21:39.748 "block_size": 4096, 00:21:39.748 "cluster_size": 4194304 00:21:39.748 }, 00:21:39.748 { 00:21:39.748 "uuid": "1b94888b-170d-49a1-b118-83a7ad08b75a", 00:21:39.748 "name": "lvs_n_0", 00:21:39.748 "base_bdev": "c0c317df-a4af-47e4-aa0f-02df895f622d", 00:21:39.748 "total_data_clusters": 1276, 00:21:39.748 "free_clusters": 1276, 00:21:39.748 "block_size": 4096, 00:21:39.748 "cluster_size": 4194304 00:21:39.748 } 00:21:39.748 ]' 00:21:39.748 12:33:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="1b94888b-170d-49a1-b118-83a7ad08b75a") .free_clusters' 00:21:39.748 12:33:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:21:39.748 12:33:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="1b94888b-170d-49a1-b118-83a7ad08b75a") .cluster_size' 00:21:39.748 12:33:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:21:39.748 12:33:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:21:39.748 5104 00:21:39.748 12:33:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:21:39.748 12:33:08 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:21:39.748 12:33:08 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1b94888b-170d-49a1-b118-83a7ad08b75a lbd_nest_0 5104 00:21:40.006 12:33:09 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=1b3962b1-607a-43b4-87ca-18ccff8811a7 00:21:40.006 12:33:09 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:21:40.264 12:33:09 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:21:40.264 12:33:09 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 1b3962b1-607a-43b4-87ca-18ccff8811a7 00:21:40.521 12:33:09 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:40.779 12:33:09 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:21:40.779 12:33:09 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:21:40.779 12:33:09 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:40.779 12:33:09 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:40.779 12:33:09 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:41.036 Initializing NVMe Controllers 00:21:41.036 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:41.036 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:41.036 WARNING: Some requested NVMe devices were skipped 00:21:41.036 No valid NVMe controllers or AIO or URING devices found 00:21:41.036 12:33:10 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:41.036 12:33:10 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:53.261 Initializing NVMe Controllers 00:21:53.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:53.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:53.261 Initialization complete. Launching workers. 
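perf.sh now sweeps the nested lvol namespace with qd_depth=(1 32 128) and io_size=(512 131072). Each 512-byte pass is rejected by spdk_nvme_perf with the "invalid ns size ... for I/O size 512" warning, because the lvol namespace exposes a 4096-byte block size, so only the 131072-byte runs produce latency tables. Written out, the traced for-loops amount to the following sketch (flags taken from the trace):

    # Queue-depth / I/O-size sweep against the fabric namespace; the 512-byte
    # passes are skipped since 512 is not a multiple of the 4096 B block size.
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    TGT='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    for qd in 1 32 128; do
        for io in 512 131072; do
            "$PERF" -q "$qd" -o "$io" -w randrw -M 50 -t 10 -r "$TGT"
        done
    done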
00:21:53.261 ======================================================== 00:21:53.261 Latency(us) 00:21:53.261 Device Information : IOPS MiB/s Average min max 00:21:53.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 998.10 124.76 1001.98 334.75 7594.26 00:21:53.261 ======================================================== 00:21:53.261 Total : 998.10 124.76 1001.98 334.75 7594.26 00:21:53.261 00:21:53.261 12:33:20 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:53.261 12:33:20 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:53.261 12:33:20 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:53.261 Initializing NVMe Controllers 00:21:53.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:53.261 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:53.261 WARNING: Some requested NVMe devices were skipped 00:21:53.261 No valid NVMe controllers or AIO or URING devices found 00:21:53.261 12:33:20 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:53.261 12:33:20 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:03.229 Initializing NVMe Controllers 00:22:03.229 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:03.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:03.229 Initialization complete. Launching workers. 
00:22:03.229 ======================================================== 00:22:03.229 Latency(us) 00:22:03.229 Device Information : IOPS MiB/s Average min max 00:22:03.229 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1313.30 164.16 24379.76 7981.79 75640.88 00:22:03.229 ======================================================== 00:22:03.229 Total : 1313.30 164.16 24379.76 7981.79 75640.88 00:22:03.229 00:22:03.229 12:33:30 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:22:03.229 12:33:30 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:22:03.229 12:33:30 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:03.229 Initializing NVMe Controllers 00:22:03.229 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:03.229 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:22:03.229 WARNING: Some requested NVMe devices were skipped 00:22:03.229 No valid NVMe controllers or AIO or URING devices found 00:22:03.229 12:33:31 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:22:03.229 12:33:31 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:13.378 Initializing NVMe Controllers 00:22:13.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:13.378 Controller IO queue size 128, less than required. 00:22:13.378 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:13.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:13.379 Initialization complete. Launching workers. 
00:22:13.379 ======================================================== 00:22:13.379 Latency(us) 00:22:13.379 Device Information : IOPS MiB/s Average min max 00:22:13.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3928.38 491.05 32629.81 12513.12 72347.60 00:22:13.379 ======================================================== 00:22:13.379 Total : 3928.38 491.05 32629.81 12513.12 72347.60 00:22:13.379 00:22:13.379 12:33:41 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:13.379 12:33:41 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1b3962b1-607a-43b4-87ca-18ccff8811a7 00:22:13.379 12:33:42 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:22:13.379 12:33:42 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c0c317df-a4af-47e4-aa0f-02df895f622d 00:22:13.637 12:33:42 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:22:13.895 12:33:42 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:13.895 12:33:42 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:13.895 12:33:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:13.895 12:33:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:13.895 12:33:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:13.895 12:33:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:13.895 12:33:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:13.895 12:33:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:13.895 rmmod nvme_tcp 00:22:13.895 rmmod nvme_fabrics 00:22:13.895 rmmod nvme_keyring 00:22:13.895 12:33:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:13.895 12:33:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:13.895 12:33:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:13.895 12:33:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 89244 ']' 00:22:13.895 12:33:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 89244 00:22:13.895 12:33:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 89244 ']' 00:22:13.895 12:33:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 89244 00:22:13.895 12:33:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:22:13.895 12:33:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:13.895 12:33:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89244 00:22:14.153 killing process with pid 89244 00:22:14.153 12:33:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:14.153 12:33:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:14.153 12:33:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89244' 00:22:14.153 12:33:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 89244 00:22:14.153 12:33:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 89244 00:22:15.527 12:33:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:15.527 12:33:44 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:15.527 12:33:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:15.527 12:33:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:15.527 12:33:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:15.527 12:33:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.527 12:33:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.527 12:33:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.527 12:33:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:15.527 ************************************ 00:22:15.527 END TEST nvmf_perf 00:22:15.527 ************************************ 00:22:15.527 00:22:15.527 real 0m50.789s 00:22:15.527 user 3m9.549s 00:22:15.527 sys 0m12.605s 00:22:15.527 12:33:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:15.527 12:33:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:15.527 12:33:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:15.527 12:33:44 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:15.527 12:33:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:15.527 12:33:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:15.527 12:33:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:15.527 ************************************ 00:22:15.527 START TEST nvmf_fio_host 00:22:15.527 ************************************ 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:15.527 * Looking for test storage... 
00:22:15.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.527 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.785 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:22:15.785 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:22:15.785 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.785 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.785 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:15.785 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:15.785 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:15.785 12:33:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.785 12:33:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.785 12:33:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.785 12:33:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
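Because NET_TYPE=virt, nvmftestinit falls through to nvmf_veth_init, which builds the test network locally instead of using physical NICs: the target side lives in network namespace nvmf_tgt_ns_spdk with nvmf_tgt_if at 10.0.0.2/24 (and nvmf_tgt_if2 at 10.0.0.3/24), the initiator keeps nvmf_init_if at 10.0.0.1/24, the peer ends of the veth pairs are enslaved to bridge nvmf_br, and an iptables rule admits TCP port 4420. A condensed sketch of the commands traced below (the second target interface and the link-up steps are omitted for brevity):

    # veth/netns topology used by the tests; names and addresses as in the trace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT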
00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:15.786 Cannot find device "nvmf_tgt_br" 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:15.786 Cannot find device "nvmf_tgt_br2" 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:15.786 Cannot find device "nvmf_tgt_br" 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:15.786 Cannot find device "nvmf_tgt_br2" 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:15.786 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:15.786 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:15.786 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:16.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:16.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:22:16.044 00:22:16.044 --- 10.0.0.2 ping statistics --- 00:22:16.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.044 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:16.044 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:16.044 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:22:16.044 00:22:16.044 --- 10.0.0.3 ping statistics --- 00:22:16.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.044 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:16.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:16.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:22:16.044 00:22:16.044 --- 10.0.0.1 ping statistics --- 00:22:16.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.044 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:16.044 12:33:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:16.044 12:33:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:16.044 12:33:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:16.044 12:33:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:16.044 12:33:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.044 12:33:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=90054 00:22:16.044 12:33:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:16.044 12:33:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:16.044 12:33:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 90054 00:22:16.044 12:33:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 90054 ']' 00:22:16.044 12:33:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.045 12:33:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.045 12:33:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.045 12:33:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.045 12:33:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.045 [2024-07-12 12:33:45.062479] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:22:16.045 [2024-07-12 12:33:45.062585] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.302 [2024-07-12 12:33:45.197832] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:16.303 [2024-07-12 12:33:45.291643] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
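With the namespace network verified by the pings above, fio.sh starts the target application inside that namespace and waits for its RPC socket: the trace shows ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF being launched as pid 90054 and waitforlisten blocking on /var/tmp/spdk.sock. As a sketch (waitforlisten is the autotest helper; backgrounding the process and capturing $! is an assumption, the trace only shows the resulting pid):

    # Start nvmf_tgt inside the test namespace and wait until its RPC socket is up.
    NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # polls until /var/tmp/spdk.sock accepts RPCs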
00:22:16.303 [2024-07-12 12:33:45.291952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.303 [2024-07-12 12:33:45.292117] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.303 [2024-07-12 12:33:45.292187] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.303 [2024-07-12 12:33:45.292303] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:16.303 [2024-07-12 12:33:45.292515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.303 [2024-07-12 12:33:45.294827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.303 [2024-07-12 12:33:45.294932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:16.303 [2024-07-12 12:33:45.294937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.303 [2024-07-12 12:33:45.354168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:16.560 12:33:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:16.560 12:33:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:22:16.560 12:33:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:16.818 [2024-07-12 12:33:45.698457] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.819 12:33:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:16.819 12:33:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:16.819 12:33:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.819 12:33:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:17.076 Malloc1 00:22:17.076 12:33:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:17.333 12:33:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:17.592 12:33:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:17.850 [2024-07-12 12:33:46.795165] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.850 12:33:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:18.108 12:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:18.366 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:18.366 fio-3.35 00:22:18.366 Starting 1 thread 00:22:20.940 00:22:20.940 test: (groupid=0, jobs=1): err= 0: pid=90130: Fri Jul 12 12:33:49 2024 00:22:20.940 read: IOPS=8921, BW=34.9MiB/s (36.5MB/s)(69.9MiB/2007msec) 00:22:20.940 slat (usec): min=2, max=198, avg= 2.43, stdev= 1.99 00:22:20.940 clat (usec): min=1741, max=12739, avg=7459.23, stdev=557.19 00:22:20.940 lat (usec): min=1780, max=12741, avg=7461.66, stdev=557.10 00:22:20.940 clat percentiles (usec): 00:22:20.940 | 1.00th=[ 6390], 5.00th=[ 6718], 10.00th=[ 6849], 20.00th=[ 7046], 00:22:20.940 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7570], 00:22:20.940 | 70.00th=[ 7701], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8356], 00:22:20.940 | 99.00th=[ 9110], 99.50th=[ 9372], 99.90th=[11207], 99.95th=[11994], 00:22:20.940 | 99.99th=[12649] 00:22:20.940 bw ( KiB/s): min=33988, max=36552, per=99.96%, avg=35673.00, stdev=1158.53, samples=4 00:22:20.940 iops : min= 8497, max= 9138, avg=8918.25, stdev=289.63, samples=4 00:22:20.940 write: IOPS=8938, BW=34.9MiB/s (36.6MB/s)(70.1MiB/2007msec); 0 zone resets 00:22:20.940 slat (usec): 
min=2, max=153, avg= 2.53, stdev= 1.37 00:22:20.940 clat (usec): min=1371, max=12803, avg=6809.73, stdev=523.83 00:22:20.940 lat (usec): min=1379, max=12805, avg=6812.27, stdev=523.84 00:22:20.940 clat percentiles (usec): 00:22:20.940 | 1.00th=[ 5800], 5.00th=[ 6128], 10.00th=[ 6259], 20.00th=[ 6456], 00:22:20.940 | 30.00th=[ 6587], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6849], 00:22:20.940 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7373], 95.00th=[ 7635], 00:22:20.940 | 99.00th=[ 8291], 99.50th=[ 8586], 99.90th=[11338], 99.95th=[12125], 00:22:20.940 | 99.99th=[12649] 00:22:20.940 bw ( KiB/s): min=34818, max=36416, per=99.93%, avg=35728.50, stdev=667.01, samples=4 00:22:20.940 iops : min= 8704, max= 9104, avg=8932.00, stdev=166.98, samples=4 00:22:20.940 lat (msec) : 2=0.03%, 4=0.08%, 10=99.73%, 20=0.15% 00:22:20.940 cpu : usr=69.09%, sys=23.28%, ctx=29, majf=0, minf=7 00:22:20.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:20.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:20.940 issued rwts: total=17906,17939,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:20.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:20.940 00:22:20.940 Run status group 0 (all jobs): 00:22:20.940 READ: bw=34.9MiB/s (36.5MB/s), 34.9MiB/s-34.9MiB/s (36.5MB/s-36.5MB/s), io=69.9MiB (73.3MB), run=2007-2007msec 00:22:20.940 WRITE: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=70.1MiB (73.5MB), run=2007-2007msec 00:22:20.940 12:33:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:20.940 12:33:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:20.940 12:33:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:20.940 12:33:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:20.940 12:33:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:20.940 12:33:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:20.940 12:33:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:20.940 12:33:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:20.940 12:33:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:20.940 12:33:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:20.940 12:33:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:20.940 12:33:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:20.940 12:33:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:20.940 12:33:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:20.940 12:33:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:20.940 12:33:49 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:20.940 12:33:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:20.940 12:33:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:20.940 12:33:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:20.940 12:33:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:20.940 12:33:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:20.940 12:33:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:20.940 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:20.940 fio-3.35 00:22:20.940 Starting 1 thread 00:22:23.468 00:22:23.468 test: (groupid=0, jobs=1): err= 0: pid=90173: Fri Jul 12 12:33:51 2024 00:22:23.468 read: IOPS=8377, BW=131MiB/s (137MB/s)(263MiB/2008msec) 00:22:23.468 slat (usec): min=3, max=115, avg= 3.66, stdev= 1.64 00:22:23.468 clat (usec): min=2203, max=17989, avg=8638.86, stdev=2648.75 00:22:23.468 lat (usec): min=2207, max=17993, avg=8642.52, stdev=2648.79 00:22:23.468 clat percentiles (usec): 00:22:23.468 | 1.00th=[ 3982], 5.00th=[ 4752], 10.00th=[ 5276], 20.00th=[ 6194], 00:22:23.468 | 30.00th=[ 7046], 40.00th=[ 7767], 50.00th=[ 8455], 60.00th=[ 9241], 00:22:23.468 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[12125], 95.00th=[13566], 00:22:23.468 | 99.00th=[15533], 99.50th=[15926], 99.90th=[17695], 99.95th=[17957], 00:22:23.468 | 99.99th=[17957] 00:22:23.468 bw ( KiB/s): min=62592, max=73600, per=50.27%, avg=67384.00, stdev=5563.24, samples=4 00:22:23.468 iops : min= 3912, max= 4600, avg=4211.50, stdev=347.70, samples=4 00:22:23.468 write: IOPS=4808, BW=75.1MiB/s (78.8MB/s)(138MiB/1836msec); 0 zone resets 00:22:23.468 slat (usec): min=36, max=634, avg=37.96, stdev= 9.56 00:22:23.468 clat (usec): min=4976, max=20182, avg=11941.36, stdev=2257.24 00:22:23.468 lat (usec): min=5013, max=20219, avg=11979.32, stdev=2256.92 00:22:23.468 clat percentiles (usec): 00:22:23.468 | 1.00th=[ 7635], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[ 9896], 00:22:23.468 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11600], 60.00th=[12256], 00:22:23.468 | 70.00th=[12911], 80.00th=[13829], 90.00th=[15008], 95.00th=[16188], 00:22:23.468 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19530], 99.95th=[19530], 00:22:23.468 | 99.99th=[20055] 00:22:23.468 bw ( KiB/s): min=64736, max=76128, per=91.01%, avg=70024.00, stdev=5755.34, samples=4 00:22:23.468 iops : min= 4046, max= 4758, avg=4376.50, stdev=359.71, samples=4 00:22:23.468 lat (msec) : 4=0.69%, 10=52.86%, 20=46.43%, 50=0.01% 00:22:23.468 cpu : usr=83.36%, sys=12.66%, ctx=5, majf=0, minf=3 00:22:23.468 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:23.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:23.468 issued rwts: total=16822,8829,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.468 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:23.468 00:22:23.468 Run status group 0 (all jobs): 00:22:23.468 READ: bw=131MiB/s (137MB/s), 
131MiB/s-131MiB/s (137MB/s-137MB/s), io=263MiB (276MB), run=2008-2008msec 00:22:23.468 WRITE: bw=75.1MiB/s (78.8MB/s), 75.1MiB/s-75.1MiB/s (78.8MB/s-78.8MB/s), io=138MiB (145MB), run=1836-1836msec 00:22:23.468 12:33:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:23.468 12:33:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:22:23.468 12:33:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:22:23.468 12:33:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:22:23.468 12:33:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:22:23.468 12:33:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:22:23.468 12:33:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:23.468 12:33:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:23.468 12:33:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:22:23.469 12:33:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:22:23.469 12:33:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:23.469 12:33:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:22:23.725 Nvme0n1 00:22:23.725 12:33:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:22:23.982 12:33:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=452271a2-8a69-4372-b009-32a16a940408 00:22:23.982 12:33:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 452271a2-8a69-4372-b009-32a16a940408 00:22:23.982 12:33:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=452271a2-8a69-4372-b009-32a16a940408 00:22:23.982 12:33:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:22:23.982 12:33:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:22:23.982 12:33:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:22:23.982 12:33:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:24.239 12:33:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:22:24.239 { 00:22:24.240 "uuid": "452271a2-8a69-4372-b009-32a16a940408", 00:22:24.240 "name": "lvs_0", 00:22:24.240 "base_bdev": "Nvme0n1", 00:22:24.240 "total_data_clusters": 4, 00:22:24.240 "free_clusters": 4, 00:22:24.240 "block_size": 4096, 00:22:24.240 "cluster_size": 1073741824 00:22:24.240 } 00:22:24.240 ]' 00:22:24.240 12:33:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="452271a2-8a69-4372-b009-32a16a940408") .free_clusters' 00:22:24.240 12:33:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 00:22:24.240 12:33:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="452271a2-8a69-4372-b009-32a16a940408") .cluster_size' 00:22:24.240 4096 00:22:24.240 12:33:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # 
cs=1073741824 00:22:24.240 12:33:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:22:24.240 12:33:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:22:24.240 12:33:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:22:24.497 419562fd-38f0-49a5-b07c-1fee07f50e97 00:22:24.497 12:33:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:22:24.755 12:33:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:22:25.013 12:33:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:25.271 12:33:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:25.271 12:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:25.271 12:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:25.271 12:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:25.271 12:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:25.271 12:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:25.271 12:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:25.271 12:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:25.271 12:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:25.271 12:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:25.271 12:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:25.271 12:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:25.271 12:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:25.271 12:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:25.272 12:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:25.272 12:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:25.272 12:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:25.272 12:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:25.272 12:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:25.272 12:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:25.272 12:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:25.272 12:33:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:25.530 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:25.530 fio-3.35 00:22:25.530 Starting 1 thread 00:22:28.071 00:22:28.071 test: (groupid=0, jobs=1): err= 0: pid=90282: Fri Jul 12 12:33:56 2024 00:22:28.071 read: IOPS=6359, BW=24.8MiB/s (26.0MB/s)(49.9MiB/2008msec) 00:22:28.071 slat (usec): min=2, max=328, avg= 2.56, stdev= 3.71 00:22:28.071 clat (usec): min=2934, max=19414, avg=10519.69, stdev=883.72 00:22:28.071 lat (usec): min=2943, max=19416, avg=10522.25, stdev=883.40 00:22:28.071 clat percentiles (usec): 00:22:28.071 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:22:28.071 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:22:28.071 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:22:28.071 | 99.00th=[12518], 99.50th=[12911], 99.90th=[16909], 99.95th=[18482], 00:22:28.071 | 99.99th=[19268] 00:22:28.071 bw ( KiB/s): min=24552, max=26152, per=99.81%, avg=25389.50, stdev=765.09, samples=4 00:22:28.071 iops : min= 6138, max= 6538, avg=6347.25, stdev=191.37, samples=4 00:22:28.071 write: IOPS=6356, BW=24.8MiB/s (26.0MB/s)(49.9MiB/2008msec); 0 zone resets 00:22:28.071 slat (usec): min=2, max=259, avg= 2.66, stdev= 2.54 00:22:28.071 clat (usec): min=2427, max=18188, avg=9529.11, stdev=826.47 00:22:28.071 lat (usec): min=2441, max=18190, avg=9531.76, stdev=826.33 00:22:28.071 clat percentiles (usec): 00:22:28.071 | 1.00th=[ 7832], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8848], 00:22:28.071 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9765], 00:22:28.071 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10421], 95.00th=[10814], 00:22:28.071 | 99.00th=[11338], 99.50th=[11600], 99.90th=[15664], 99.95th=[16909], 00:22:28.071 | 99.99th=[18220] 00:22:28.071 bw ( KiB/s): min=25101, max=25720, per=99.88%, avg=25397.25, stdev=281.95, samples=4 00:22:28.071 iops : min= 6275, max= 6430, avg=6349.25, stdev=70.58, samples=4 00:22:28.071 lat (msec) : 4=0.06%, 10=49.85%, 20=50.08% 00:22:28.071 cpu : usr=71.20%, sys=23.07%, ctx=27, majf=0, minf=7 00:22:28.071 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:28.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:28.071 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:28.071 issued rwts: total=12769,12764,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:28.071 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:28.071 00:22:28.071 Run status group 0 (all jobs): 00:22:28.071 READ: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=49.9MiB (52.3MB), run=2008-2008msec 00:22:28.071 WRITE: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=49.9MiB (52.3MB), run=2008-2008msec 00:22:28.071 12:33:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:28.071 12:33:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:22:28.329 12:33:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # 
ls_nested_guid=2c8680e6-3706-4fea-89dd-ba6b9617d7b9 00:22:28.329 12:33:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 2c8680e6-3706-4fea-89dd-ba6b9617d7b9 00:22:28.329 12:33:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=2c8680e6-3706-4fea-89dd-ba6b9617d7b9 00:22:28.329 12:33:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:22:28.329 12:33:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:22:28.329 12:33:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:22:28.329 12:33:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:28.588 12:33:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:22:28.588 { 00:22:28.588 "uuid": "452271a2-8a69-4372-b009-32a16a940408", 00:22:28.588 "name": "lvs_0", 00:22:28.588 "base_bdev": "Nvme0n1", 00:22:28.588 "total_data_clusters": 4, 00:22:28.588 "free_clusters": 0, 00:22:28.588 "block_size": 4096, 00:22:28.588 "cluster_size": 1073741824 00:22:28.588 }, 00:22:28.588 { 00:22:28.588 "uuid": "2c8680e6-3706-4fea-89dd-ba6b9617d7b9", 00:22:28.588 "name": "lvs_n_0", 00:22:28.588 "base_bdev": "419562fd-38f0-49a5-b07c-1fee07f50e97", 00:22:28.588 "total_data_clusters": 1022, 00:22:28.588 "free_clusters": 1022, 00:22:28.588 "block_size": 4096, 00:22:28.588 "cluster_size": 4194304 00:22:28.588 } 00:22:28.588 ]' 00:22:28.588 12:33:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="2c8680e6-3706-4fea-89dd-ba6b9617d7b9") .free_clusters' 00:22:28.588 12:33:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:22:28.588 12:33:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="2c8680e6-3706-4fea-89dd-ba6b9617d7b9") .cluster_size' 00:22:28.588 4088 00:22:28.588 12:33:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:22:28.588 12:33:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:22:28.588 12:33:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:22:28.588 12:33:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:22:28.846 d0480e09-084e-4de5-affd-f08193a9c7c3 00:22:28.846 12:33:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:22:29.105 12:33:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:22:29.363 12:33:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:22:29.929 12:33:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:29.929 12:33:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:29.929 12:33:58 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:29.930 12:33:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:29.930 12:33:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:29.930 12:33:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:29.930 12:33:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:29.930 12:33:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:29.930 12:33:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:29.930 12:33:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:29.930 12:33:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:29.930 12:33:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:29.930 12:33:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:29.930 12:33:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:29.930 12:33:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:29.930 12:33:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:29.930 12:33:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:29.930 12:33:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:29.930 12:33:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:29.930 12:33:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:29.930 12:33:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:29.930 12:33:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:29.930 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:29.930 fio-3.35 00:22:29.930 Starting 1 thread 00:22:32.460 00:22:32.460 test: (groupid=0, jobs=1): err= 0: pid=90360: Fri Jul 12 12:34:01 2024 00:22:32.460 read: IOPS=5552, BW=21.7MiB/s (22.7MB/s)(43.6MiB/2010msec) 00:22:32.460 slat (usec): min=2, max=331, avg= 2.69, stdev= 3.96 00:22:32.460 clat (usec): min=3424, max=20393, avg=12086.29, stdev=1021.38 00:22:32.460 lat (usec): min=3434, max=20396, avg=12088.98, stdev=1021.07 00:22:32.460 clat percentiles (usec): 00:22:32.460 | 1.00th=[ 9896], 5.00th=[10683], 10.00th=[10945], 20.00th=[11338], 00:22:32.460 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:22:32.460 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13304], 95.00th=[13698], 00:22:32.460 | 99.00th=[14353], 99.50th=[14746], 99.90th=[19792], 99.95th=[20055], 00:22:32.460 | 99.99th=[20317] 00:22:32.460 bw ( KiB/s): min=20992, max=23008, per=99.98%, avg=22206.00, stdev=869.23, samples=4 00:22:32.460 iops : min= 5248, max= 5752, avg=5551.50, stdev=217.31, samples=4 00:22:32.460 write: IOPS=5523, BW=21.6MiB/s (22.6MB/s)(43.4MiB/2010msec); 0 zone resets 00:22:32.460 slat 
(usec): min=2, max=273, avg= 2.80, stdev= 2.88 00:22:32.460 clat (usec): min=2457, max=19967, avg=10930.28, stdev=969.37 00:22:32.460 lat (usec): min=2471, max=19970, avg=10933.08, stdev=969.28 00:22:32.460 clat percentiles (usec): 00:22:32.460 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10159], 00:22:32.460 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:22:32.460 | 70.00th=[11338], 80.00th=[11731], 90.00th=[11994], 95.00th=[12387], 00:22:32.460 | 99.00th=[13042], 99.50th=[13435], 99.90th=[18482], 99.95th=[18744], 00:22:32.460 | 99.99th=[19006] 00:22:32.460 bw ( KiB/s): min=21888, max=22464, per=99.88%, avg=22066.00, stdev=267.28, samples=4 00:22:32.460 iops : min= 5472, max= 5616, avg=5516.50, stdev=66.82, samples=4 00:22:32.460 lat (msec) : 4=0.04%, 10=7.45%, 20=92.47%, 50=0.04% 00:22:32.460 cpu : usr=72.67%, sys=21.80%, ctx=2, majf=0, minf=7 00:22:32.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:22:32.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:32.460 issued rwts: total=11161,11102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.460 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:32.460 00:22:32.460 Run status group 0 (all jobs): 00:22:32.460 READ: bw=21.7MiB/s (22.7MB/s), 21.7MiB/s-21.7MiB/s (22.7MB/s-22.7MB/s), io=43.6MiB (45.7MB), run=2010-2010msec 00:22:32.460 WRITE: bw=21.6MiB/s (22.6MB/s), 21.6MiB/s-21.6MiB/s (22.6MB/s-22.6MB/s), io=43.4MiB (45.5MB), run=2010-2010msec 00:22:32.460 12:34:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:32.460 12:34:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:22:32.460 12:34:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:22:32.719 12:34:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:22:32.977 12:34:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:22:33.257 12:34:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:22:33.516 12:34:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:22:34.454 12:34:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:34.454 12:34:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:34.454 12:34:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:34.454 12:34:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:34.454 12:34:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:34.454 12:34:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:34.454 12:34:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:34.454 12:34:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:34.454 12:34:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:34.454 rmmod nvme_tcp 00:22:34.454 rmmod nvme_fabrics 00:22:34.454 rmmod nvme_keyring 00:22:34.454 12:34:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:22:34.454 12:34:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:34.454 12:34:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:34.454 12:34:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 90054 ']' 00:22:34.454 12:34:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 90054 00:22:34.454 12:34:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 90054 ']' 00:22:34.454 12:34:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 90054 00:22:34.454 12:34:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:22:34.454 12:34:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:34.454 12:34:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90054 00:22:34.712 killing process with pid 90054 00:22:34.712 12:34:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:34.712 12:34:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:34.712 12:34:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90054' 00:22:34.712 12:34:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 90054 00:22:34.712 12:34:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 90054 00:22:34.712 12:34:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:34.712 12:34:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:34.712 12:34:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:34.712 12:34:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:34.712 12:34:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:34.712 12:34:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.712 12:34:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.712 12:34:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.971 12:34:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:34.971 00:22:34.971 real 0m19.303s 00:22:34.971 user 1m24.584s 00:22:34.971 sys 0m4.453s 00:22:34.971 12:34:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:34.971 12:34:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.971 ************************************ 00:22:34.971 END TEST nvmf_fio_host 00:22:34.971 ************************************ 00:22:34.971 12:34:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:34.971 12:34:03 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:34.971 12:34:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:34.971 12:34:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:34.971 12:34:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:34.971 ************************************ 00:22:34.971 START TEST nvmf_failover 00:22:34.971 ************************************ 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:34.971 * Looking for test 
storage... 00:22:34.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:34.971 
12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:34.971 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:34.972 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.972 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.972 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:34.972 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:34.972 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:34.972 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:34.972 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:34.972 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.972 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:34.972 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:34.972 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:34.972 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:34.972 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:34.972 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:34.972 Cannot find device "nvmf_tgt_br" 00:22:34.972 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:22:34.972 12:34:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:34.972 Cannot find device "nvmf_tgt_br2" 00:22:34.972 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:22:34.972 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:34.972 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:34.972 Cannot find device "nvmf_tgt_br" 00:22:34.972 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:22:34.972 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:34.972 Cannot find device "nvmf_tgt_br2" 00:22:34.972 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:22:34.972 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
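Before the target starts, nvmf_veth_init tears down whatever interfaces a previous run may have left behind (the "Cannot find device" messages above are expected on a clean host) and then rebuilds the test topology from scratch. Condensed from the ip/iptables commands traced below, and with most of the matching "ip link set ... up" calls omitted for brevity, the layout is roughly:

    # target namespace holds nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3);
    # the host keeps nvmf_init_if (10.0.0.1); all veth peers hang off bridge nvmf_br
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that close the setup (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm the bridge is forwarding before the target is launched.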
00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:35.230 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:35.230 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:35.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:35.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:22:35.230 00:22:35.230 --- 10.0.0.2 ping statistics --- 00:22:35.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.230 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:35.230 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:35.230 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:22:35.230 00:22:35.230 --- 10.0.0.3 ping statistics --- 00:22:35.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.230 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:35.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:35.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:22:35.230 00:22:35.230 --- 10.0.0.1 ping statistics --- 00:22:35.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.230 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.230 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:22:35.231 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:35.231 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.231 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:35.231 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:35.231 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.231 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:35.231 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:35.231 12:34:04 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:35.231 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:35.231 12:34:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:35.231 12:34:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:35.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.231 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=90598 00:22:35.231 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:35.231 12:34:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 90598 00:22:35.231 12:34:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 90598 ']' 00:22:35.231 12:34:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.231 12:34:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.231 12:34:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
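With connectivity in place, nvmfappstart launches the target inside the namespace and blocks until its RPC socket answers. Stripped of the xtrace prefixes, the launch traced above boils down to the sketch below; the polling loop is only a simplified stand-in for the suite's waitforlisten helper, not its actual implementation:

    # -i shared-memory id, -e tracepoint group mask, -m core mask (0xE = cores 1-3)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # crude readiness check: keep asking the RPC server until it responds
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

The pid captured here (90598 in this run) is the one nvmftestfini eventually kills, the same way the fio_host run above killed its target pid 90054.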
00:22:35.231 12:34:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.231 12:34:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:35.489 [2024-07-12 12:34:04.345541] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:22:35.489 [2024-07-12 12:34:04.345855] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.489 [2024-07-12 12:34:04.487762] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:35.747 [2024-07-12 12:34:04.580770] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.747 [2024-07-12 12:34:04.581003] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.747 [2024-07-12 12:34:04.581400] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.747 [2024-07-12 12:34:04.581530] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.747 [2024-07-12 12:34:04.581741] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:35.747 [2024-07-12 12:34:04.582145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.747 [2024-07-12 12:34:04.582214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:35.747 [2024-07-12 12:34:04.582216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.748 [2024-07-12 12:34:04.637055] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:36.313 12:34:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.313 12:34:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:36.313 12:34:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:36.313 12:34:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:36.313 12:34:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:36.571 12:34:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.571 12:34:05 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:36.829 [2024-07-12 12:34:05.678983] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.829 12:34:05 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:37.087 Malloc0 00:22:37.087 12:34:06 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:37.344 12:34:06 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:37.603 12:34:06 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:37.860 [2024-07-12 12:34:06.775748] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:22:37.860 12:34:06 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:38.118 [2024-07-12 12:34:07.011937] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:38.118 12:34:07 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:38.376 [2024-07-12 12:34:07.248134] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:38.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:38.376 12:34:07 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=90660 00:22:38.376 12:34:07 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:38.376 12:34:07 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:38.376 12:34:07 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 90660 /var/tmp/bdevperf.sock 00:22:38.376 12:34:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 90660 ']' 00:22:38.376 12:34:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:38.376 12:34:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.376 12:34:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
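At this point the target side of the failover test is fully configured: one malloc-backed namespace reachable through three TCP listeners on the same address. Collapsed into one place, and with the long script paths abbreviated to rpc.py, the RPC sequence traced above is:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

bdevperf then comes up with its own RPC socket (/var/tmp/bdevperf.sock) and, as the next trace lines show, attaches controller NVMe0 to the 4420 and 4421 paths before the timed verify workload and the listener shuffle begin.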
00:22:38.376 12:34:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.376 12:34:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:39.310 12:34:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.310 12:34:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:39.310 12:34:08 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:39.568 NVMe0n1 00:22:39.568 12:34:08 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:39.827 00:22:39.827 12:34:08 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=90685 00:22:39.827 12:34:08 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:39.827 12:34:08 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:41.201 12:34:09 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:41.201 12:34:10 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:44.482 12:34:13 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:44.482 00:22:44.482 12:34:13 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:44.738 12:34:13 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:48.019 12:34:16 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:48.019 [2024-07-12 12:34:16.981092] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.019 12:34:17 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:48.956 12:34:18 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:49.517 12:34:18 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 90685 00:22:56.074 0 00:22:56.074 12:34:23 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 90660 00:22:56.074 12:34:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 90660 ']' 00:22:56.074 12:34:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 90660 00:22:56.074 12:34:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:56.074 12:34:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:56.074 12:34:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90660 00:22:56.074 killing process with pid 90660 00:22:56.074 12:34:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:56.074 
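For reference, the path shuffle that the 15-second bdevperf run rides through is the following sequence from failover.sh, reproduced here without the trace prefixes:

    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # drop the first path
    sleep 3
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # restore the first path
    sleep 1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

The verify job keeps running across every transition, and the try.txt dump that follows records the ABORTED - SQ DELETION completions that appear each time an active listener is pulled out from under it.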
12:34:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:56.074 12:34:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90660' 00:22:56.075 12:34:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 90660 00:22:56.075 12:34:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 90660 00:22:56.075 12:34:24 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:56.075 [2024-07-12 12:34:07.331968] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:22:56.075 [2024-07-12 12:34:07.332194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90660 ] 00:22:56.075 [2024-07-12 12:34:07.476369] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.075 [2024-07-12 12:34:07.575983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.075 [2024-07-12 12:34:07.632966] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:56.075 Running I/O for 15 seconds... 00:22:56.075 [2024-07-12 12:34:10.109994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.075 [2024-07-12 12:34:10.110067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:31 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64920 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 
12:34:10.110926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.110971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.110985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.111001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.111014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.111030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.111044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.111060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.111073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.111088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.111103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.111120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.111134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.111149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.111163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.111178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.111192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.111215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.111229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.111245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.111259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.111274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.075 [2024-07-12 12:34:10.111298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.075 [2024-07-12 12:34:10.111317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.111346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.111375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.111405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.111434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.111464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.111492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.111521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.111550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.111579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.111617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.111646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.111675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.111704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.111733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.111773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.111816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.111846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:56.076 [2024-07-12 12:34:10.111875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.111903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.111933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.111961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.111975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.111997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.112012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.112028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.112042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.112057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.112071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.112086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.112100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.112115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.112128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.112144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.112157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.112173] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.112196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.112213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.112226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.112242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.112256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.112271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.112284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.112300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.112313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.112329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.112342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.112358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.112371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.112394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.112408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.112424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.112437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.112452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.112465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.112481] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.112494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.112516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.112530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.112546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.112560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.112576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.112589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.076 [2024-07-12 12:34:10.112604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.076 [2024-07-12 12:34:10.112618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.112633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.112647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.112662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.112680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.112696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.112710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.112726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.112739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.112755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.112778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.112806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65488 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.112821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.112836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.112849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.112865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.112878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.112894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.112907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.112923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.112936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.112952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.112965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.112981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.112994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.113030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.113059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.113087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 
[2024-07-12 12:34:10.113116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.113145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.113186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.113216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.113245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.113274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.113303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.113333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.113362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.113391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.113419] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.113448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.113477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.077 [2024-07-12 12:34:10.113511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.077 [2024-07-12 12:34:10.113548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.077 [2024-07-12 12:34:10.113579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.077 [2024-07-12 12:34:10.113608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.077 [2024-07-12 12:34:10.113637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.077 [2024-07-12 12:34:10.113670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.077 [2024-07-12 12:34:10.113701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.077 [2024-07-12 12:34:10.113729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.077 [2024-07-12 12:34:10.113758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.077 [2024-07-12 12:34:10.113798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.077 [2024-07-12 12:34:10.113829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.077 [2024-07-12 12:34:10.113858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.077 [2024-07-12 12:34:10.113887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.077 [2024-07-12 12:34:10.113916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.077 [2024-07-12 12:34:10.113931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.078 [2024-07-12 12:34:10.113952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:10.113968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.078 [2024-07-12 12:34:10.113982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:10.114003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:10.114017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:10.114032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1520200 is same with the state(5) to be set 00:22:56.078 [2024-07-12 12:34:10.114049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.078 [2024-07-12 
12:34:10.114061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.078 [2024-07-12 12:34:10.114072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65688 len:8 PRP1 0x0 PRP2 0x0 00:22:56.078 [2024-07-12 12:34:10.114085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:10.114144] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1520200 was disconnected and freed. reset controller. 00:22:56.078 [2024-07-12 12:34:10.114162] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:56.078 [2024-07-12 12:34:10.114217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.078 [2024-07-12 12:34:10.114239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:10.114262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.078 [2024-07-12 12:34:10.114276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:10.114290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.078 [2024-07-12 12:34:10.114303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:10.114318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.078 [2024-07-12 12:34:10.114331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:10.114346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:56.078 [2024-07-12 12:34:10.114420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fef60 (9): Bad file descriptor 00:22:56.078 [2024-07-12 12:34:10.118222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:56.078 [2024-07-12 12:34:10.159332] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
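Note on the completion status repeated throughout the dump above: every aborted command is reported as ABORTED - SQ DELETION (00/08), i.e. status code type 0x00 (generic) with status code 0x08 (command aborted due to SQ deletion). This is what the controller returns for I/O still outstanding on a submission queue that is deleted while the qpair is torn down for the failover from 10.0.0.2:4420 to 10.0.0.2:4421. The fragment below is a minimal sketch, not taken from this test, of how a completion callback written against the public SPDK NVMe API (spdk/nvme.h) could recognize that status and flag the I/O for resubmission on the surviving path; the callback name and the requeue flag are illustrative assumptions, not SPDK code.

/* Sketch only: decode the (00/08) "ABORTED - SQ DELETION" status seen in the
 * log above, assuming the public SPDK NVMe API from spdk/nvme.h. */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

static void
io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	bool *need_requeue = cb_arg;   /* illustrative flag owned by the caller */

	if (!spdk_nvme_cpl_is_error(cpl)) {
		return;   /* I/O completed successfully */
	}

	/* (00/08): SCT 0x00 = generic command status, SC 0x08 = aborted due to
	 * SQ deletion. Such I/O was never executed and is safe to resubmit once
	 * a qpair on the alternate path is available. */
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		*need_requeue = true;
		return;
	}

	fprintf(stderr, "I/O failed: sct=%u sc=%u\n",
		cpl->status.sct, cpl->status.sc);
}

In the run above the equivalent handling happens inside the bdev_nvme layer: bdev_nvme_disconnected_qpair_cb frees the disconnected qpair, bdev_nvme_failover_trid switches the target address to 10.0.0.2:4421, and the controller reset completes ("Resetting controller successful"), after which the workload continues against the new path (the 12:34:13 dump that follows).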
00:22:56.078 [2024-07-12 12:34:13.717874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.717941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.717971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.718024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.718057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.718086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.718115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.718144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.718173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.718202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.718231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.718260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718275] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.718289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.718318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.718347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.718375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.718413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.718442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.078 [2024-07-12 12:34:13.718472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.078 [2024-07-12 12:34:13.718504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.078 [2024-07-12 12:34:13.718533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.078 [2024-07-12 12:34:13.718563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718578] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.078 [2024-07-12 12:34:13.718591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.078 [2024-07-12 12:34:13.718620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.078 [2024-07-12 12:34:13.718649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.078 [2024-07-12 12:34:13.718678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.718708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.718738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.718776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.718823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.718851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.078 [2024-07-12 12:34:13.718880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.078 [2024-07-12 12:34:13.718897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81400 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.079 [2024-07-12 12:34:13.718911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.718927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.079 [2024-07-12 12:34:13.718940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.718955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.079 [2024-07-12 12:34:13.718969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.718984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.079 [2024-07-12 12:34:13.718998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.079 [2024-07-12 12:34:13.719027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.079 [2024-07-12 12:34:13.719056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.079 [2024-07-12 12:34:13.719084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.079 [2024-07-12 12:34:13.719114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.079 [2024-07-12 12:34:13.719143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.079 [2024-07-12 12:34:13.719181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:56.079 [2024-07-12 12:34:13.719211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.079 [2024-07-12 12:34:13.719240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.079 [2024-07-12 12:34:13.719269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.079 [2024-07-12 12:34:13.719319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.079 [2024-07-12 12:34:13.719349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.079 [2024-07-12 12:34:13.719378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.079 [2024-07-12 12:34:13.719408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.079 [2024-07-12 12:34:13.719438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.079 [2024-07-12 12:34:13.719467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.079 [2024-07-12 12:34:13.719496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.079 [2024-07-12 12:34:13.719525] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.079 [2024-07-12 12:34:13.719554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.079 [2024-07-12 12:34:13.719593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.079 [2024-07-12 12:34:13.719622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.079 [2024-07-12 12:34:13.719653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.079 [2024-07-12 12:34:13.719684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.079 [2024-07-12 12:34:13.719714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.079 [2024-07-12 12:34:13.719743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.079 [2024-07-12 12:34:13.719772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.079 [2024-07-12 12:34:13.719813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.079 [2024-07-12 12:34:13.719843] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.079 [2024-07-12 12:34:13.719872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.079 [2024-07-12 12:34:13.719901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.079 [2024-07-12 12:34:13.719917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.719930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.719946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.719967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.719983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.719997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.720027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.720056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.720086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.720115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.720144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.720173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.080 [2024-07-12 12:34:13.720202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.080 [2024-07-12 12:34:13.720231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.080 [2024-07-12 12:34:13.720260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.080 [2024-07-12 12:34:13.720289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.080 [2024-07-12 12:34:13.720319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.080 [2024-07-12 12:34:13.720358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.080 [2024-07-12 12:34:13.720388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.080 [2024-07-12 12:34:13.720417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.080 [2024-07-12 12:34:13.720447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 
[2024-07-12 12:34:13.720462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.080 [2024-07-12 12:34:13.720476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.080 [2024-07-12 12:34:13.720506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.080 [2024-07-12 12:34:13.720534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.080 [2024-07-12 12:34:13.720565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.080 [2024-07-12 12:34:13.720594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.080 [2024-07-12 12:34:13.720623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.080 [2024-07-12 12:34:13.720653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.720682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.720718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.720750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.720782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.720825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.720857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.720889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.720921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.720962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.720979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.720994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.721019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.721035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.721051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.721065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.721082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.721096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.721113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:122 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.721128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.721144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.721167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.721185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.080 [2024-07-12 12:34:13.721200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.721216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.080 [2024-07-12 12:34:13.721230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.080 [2024-07-12 12:34:13.721246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.080 [2024-07-12 12:34:13.721261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.721277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.081 [2024-07-12 12:34:13.721291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.721308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.081 [2024-07-12 12:34:13.721323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.721339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.081 [2024-07-12 12:34:13.721354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.721370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.081 [2024-07-12 12:34:13.721385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.721402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.081 [2024-07-12 12:34:13.721417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.721432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f9250 is same with the state(5) 
to be set 00:22:56.081 [2024-07-12 12:34:13.721449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.081 [2024-07-12 12:34:13.721461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.081 [2024-07-12 12:34:13.721473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81216 len:8 PRP1 0x0 PRP2 0x0 00:22:56.081 [2024-07-12 12:34:13.721492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.721507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.081 [2024-07-12 12:34:13.721523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.081 [2024-07-12 12:34:13.721535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81736 len:8 PRP1 0x0 PRP2 0x0 00:22:56.081 [2024-07-12 12:34:13.721550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.721572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.081 [2024-07-12 12:34:13.721584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.081 [2024-07-12 12:34:13.721596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81744 len:8 PRP1 0x0 PRP2 0x0 00:22:56.081 [2024-07-12 12:34:13.721610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.721625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.081 [2024-07-12 12:34:13.721636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.081 [2024-07-12 12:34:13.721647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81752 len:8 PRP1 0x0 PRP2 0x0 00:22:56.081 [2024-07-12 12:34:13.721661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.721676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.081 [2024-07-12 12:34:13.721687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.081 [2024-07-12 12:34:13.721698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81760 len:8 PRP1 0x0 PRP2 0x0 00:22:56.081 [2024-07-12 12:34:13.721712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.721727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.081 [2024-07-12 12:34:13.721738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.081 [2024-07-12 12:34:13.721749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81768 len:8 PRP1 0x0 PRP2 0x0 00:22:56.081 [2024-07-12 12:34:13.721763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.721777] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.081 [2024-07-12 12:34:13.721799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.081 [2024-07-12 12:34:13.721811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81776 len:8 PRP1 0x0 PRP2 0x0 00:22:56.081 [2024-07-12 12:34:13.721825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.721840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.081 [2024-07-12 12:34:13.721851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.081 [2024-07-12 12:34:13.721863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81784 len:8 PRP1 0x0 PRP2 0x0 00:22:56.081 [2024-07-12 12:34:13.721877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.721891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.081 [2024-07-12 12:34:13.721902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.081 [2024-07-12 12:34:13.721914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81792 len:8 PRP1 0x0 PRP2 0x0 00:22:56.081 [2024-07-12 12:34:13.721929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.721943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.081 [2024-07-12 12:34:13.721959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.081 [2024-07-12 12:34:13.721971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81800 len:8 PRP1 0x0 PRP2 0x0 00:22:56.081 [2024-07-12 12:34:13.721994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.722021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.081 [2024-07-12 12:34:13.722032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.081 [2024-07-12 12:34:13.722043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81808 len:8 PRP1 0x0 PRP2 0x0 00:22:56.081 [2024-07-12 12:34:13.722057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.722072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.081 [2024-07-12 12:34:13.722082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.081 [2024-07-12 12:34:13.722094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81816 len:8 PRP1 0x0 PRP2 0x0 00:22:56.081 [2024-07-12 12:34:13.722108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.722122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:22:56.081 [2024-07-12 12:34:13.722133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.081 [2024-07-12 12:34:13.722144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81824 len:8 PRP1 0x0 PRP2 0x0 00:22:56.081 [2024-07-12 12:34:13.722159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.722174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.081 [2024-07-12 12:34:13.722184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.081 [2024-07-12 12:34:13.722195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81832 len:8 PRP1 0x0 PRP2 0x0 00:22:56.081 [2024-07-12 12:34:13.722210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.722224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.081 [2024-07-12 12:34:13.722235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.081 [2024-07-12 12:34:13.722246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81840 len:8 PRP1 0x0 PRP2 0x0 00:22:56.081 [2024-07-12 12:34:13.722260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.722275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.081 [2024-07-12 12:34:13.722293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.081 [2024-07-12 12:34:13.722305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81848 len:8 PRP1 0x0 PRP2 0x0 00:22:56.081 [2024-07-12 12:34:13.722319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.081 [2024-07-12 12:34:13.722334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.081 [2024-07-12 12:34:13.722345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.082 [2024-07-12 12:34:13.722356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81856 len:8 PRP1 0x0 PRP2 0x0 00:22:56.082 [2024-07-12 12:34:13.722370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:13.722428] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14f9250 was disconnected and freed. reset controller. 
00:22:56.082 [2024-07-12 12:34:13.722451] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:56.082 [2024-07-12 12:34:13.722529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.082 [2024-07-12 12:34:13.722560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:13.722576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.082 [2024-07-12 12:34:13.722591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:13.722607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.082 [2024-07-12 12:34:13.722621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:13.722636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.082 [2024-07-12 12:34:13.722651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:13.722666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:56.082 [2024-07-12 12:34:13.722718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fef60 (9): Bad file descriptor 00:22:56.082 [2024-07-12 12:34:13.726539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:56.082 [2024-07-12 12:34:13.761992] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:56.082 [2024-07-12 12:34:18.283678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:34936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.082 [2024-07-12 12:34:18.283757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.283802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.082 [2024-07-12 12:34:18.283821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.283838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.082 [2024-07-12 12:34:18.283852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.283868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.082 [2024-07-12 12:34:18.283882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.283897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:34968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.082 [2024-07-12 12:34:18.283911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.283927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.082 [2024-07-12 12:34:18.283941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.283956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.082 [2024-07-12 12:34:18.283970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.082 [2024-07-12 12:34:18.284029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:34432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284102] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:34440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:34448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:34456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:34464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:34472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:34480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:34488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:34544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:34552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:34568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:34576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:34592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284701] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:112 nsid:1 lba:34600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:34608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.082 [2024-07-12 12:34:18.284745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.082 [2024-07-12 12:34:18.284782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.082 [2024-07-12 12:34:18.284827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.082 [2024-07-12 12:34:18.284843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.082 [2024-07-12 12:34:18.284857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.284872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.083 [2024-07-12 12:34:18.284886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.284902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.083 [2024-07-12 12:34:18.284915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.284931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.083 [2024-07-12 12:34:18.284945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.284961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.083 [2024-07-12 12:34:18.284986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.083 [2024-07-12 12:34:18.285015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35064 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.083 [2024-07-12 12:34:18.285043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.083 [2024-07-12 12:34:18.285073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.083 [2024-07-12 12:34:18.285102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.083 [2024-07-12 12:34:18.285131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.083 [2024-07-12 12:34:18.285167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.083 [2024-07-12 12:34:18.285209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.083 [2024-07-12 12:34:18.285238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.083 [2024-07-12 12:34:18.285268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.083 [2024-07-12 12:34:18.285298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.083 [2024-07-12 12:34:18.285328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:34632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:56.083 [2024-07-12 12:34:18.285357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:34640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.083 [2024-07-12 12:34:18.285387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:34648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.083 [2024-07-12 12:34:18.285416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.083 [2024-07-12 12:34:18.285445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.083 [2024-07-12 12:34:18.285478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:34672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.083 [2024-07-12 12:34:18.285507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:34680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.083 [2024-07-12 12:34:18.285542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:34688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.083 [2024-07-12 12:34:18.285579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.083 [2024-07-12 12:34:18.285609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:34704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.083 [2024-07-12 12:34:18.285639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:34712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.083 [2024-07-12 12:34:18.285669] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.083 [2024-07-12 12:34:18.285698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.083 [2024-07-12 12:34:18.285728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:34736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.083 [2024-07-12 12:34:18.285758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.083 [2024-07-12 12:34:18.285797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.083 [2024-07-12 12:34:18.285815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.083 [2024-07-12 12:34:18.285829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.285845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.285858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.285874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.285888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.285903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.285917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.285933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.285946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.285970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.285985] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.286016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.286046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.286084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.286114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.286143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.286172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.286202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.286232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.286262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:34744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.084 [2024-07-12 12:34:18.286291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:34752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.084 [2024-07-12 12:34:18.286321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:34760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.084 [2024-07-12 12:34:18.286350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:34768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.084 [2024-07-12 12:34:18.286387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.084 [2024-07-12 12:34:18.286416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.084 [2024-07-12 12:34:18.286446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.084 [2024-07-12 12:34:18.286476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.084 [2024-07-12 12:34:18.286506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.286535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.286564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.286593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 
[2024-07-12 12:34:18.286609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.286623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.286652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.286683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.286721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.084 [2024-07-12 12:34:18.286759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:34808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.084 [2024-07-12 12:34:18.286800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.084 [2024-07-12 12:34:18.286832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:34824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.084 [2024-07-12 12:34:18.286863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.084 [2024-07-12 12:34:18.286893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.084 [2024-07-12 12:34:18.286922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.084 [2024-07-12 12:34:18.286952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:34856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.084 [2024-07-12 12:34:18.286980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.084 [2024-07-12 12:34:18.286996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.084 [2024-07-12 12:34:18.287010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.287025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:34872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.085 [2024-07-12 12:34:18.287039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.287055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:34880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.085 [2024-07-12 12:34:18.287068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.287084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.085 [2024-07-12 12:34:18.287098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.287114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.085 [2024-07-12 12:34:18.287128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.287151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:34904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.085 [2024-07-12 12:34:18.287166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.287182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:34912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.085 [2024-07-12 12:34:18.287196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.287217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:34920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.085 [2024-07-12 12:34:18.287232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.287247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x14f9250 is same with the state(5) to be set 00:22:56.085 [2024-07-12 12:34:18.287264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.085 [2024-07-12 12:34:18.287275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.085 [2024-07-12 12:34:18.287298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34928 len:8 PRP1 0x0 PRP2 0x0 00:22:56.085 [2024-07-12 12:34:18.287314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.287329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.085 [2024-07-12 12:34:18.287347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.085 [2024-07-12 12:34:18.287359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35320 len:8 PRP1 0x0 PRP2 0x0 00:22:56.085 [2024-07-12 12:34:18.287373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.287387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.085 [2024-07-12 12:34:18.287397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.085 [2024-07-12 12:34:18.287408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35328 len:8 PRP1 0x0 PRP2 0x0 00:22:56.085 [2024-07-12 12:34:18.287422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.287436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.085 [2024-07-12 12:34:18.287447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.085 [2024-07-12 12:34:18.287457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35336 len:8 PRP1 0x0 PRP2 0x0 00:22:56.085 [2024-07-12 12:34:18.287471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.287485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.085 [2024-07-12 12:34:18.287495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.085 [2024-07-12 12:34:18.287506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35344 len:8 PRP1 0x0 PRP2 0x0 00:22:56.085 [2024-07-12 12:34:18.287519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.287533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.085 [2024-07-12 12:34:18.287543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.085 [2024-07-12 12:34:18.287554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35352 len:8 PRP1 0x0 PRP2 0x0 00:22:56.085 [2024-07-12 12:34:18.287575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.287590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.085 [2024-07-12 12:34:18.287600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.085 [2024-07-12 12:34:18.287611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35360 len:8 PRP1 0x0 PRP2 0x0 00:22:56.085 [2024-07-12 12:34:18.287625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.287646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.085 [2024-07-12 12:34:18.287661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.085 [2024-07-12 12:34:18.287672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35368 len:8 PRP1 0x0 PRP2 0x0 00:22:56.085 [2024-07-12 12:34:18.287685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.287699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.085 [2024-07-12 12:34:18.287709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.085 [2024-07-12 12:34:18.287720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35376 len:8 PRP1 0x0 PRP2 0x0 00:22:56.085 [2024-07-12 12:34:18.287734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.287747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.085 [2024-07-12 12:34:18.287769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.085 [2024-07-12 12:34:18.287781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35384 len:8 PRP1 0x0 PRP2 0x0 00:22:56.085 [2024-07-12 12:34:18.287805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.287820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.085 [2024-07-12 12:34:18.287830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.085 [2024-07-12 12:34:18.287840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35392 len:8 PRP1 0x0 PRP2 0x0 00:22:56.085 [2024-07-12 12:34:18.287854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.287868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.085 [2024-07-12 12:34:18.287878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.085 [2024-07-12 12:34:18.287888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35400 len:8 PRP1 0x0 PRP2 0x0 00:22:56.085 [2024-07-12 12:34:18.287902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 
12:34:18.287916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.085 [2024-07-12 12:34:18.287926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.085 [2024-07-12 12:34:18.287937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35408 len:8 PRP1 0x0 PRP2 0x0 00:22:56.085 [2024-07-12 12:34:18.287950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.287964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.085 [2024-07-12 12:34:18.287982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.085 [2024-07-12 12:34:18.287994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35416 len:8 PRP1 0x0 PRP2 0x0 00:22:56.085 [2024-07-12 12:34:18.288008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.288022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.085 [2024-07-12 12:34:18.288032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.085 [2024-07-12 12:34:18.288043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35424 len:8 PRP1 0x0 PRP2 0x0 00:22:56.085 [2024-07-12 12:34:18.288056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.288070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.085 [2024-07-12 12:34:18.288085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.085 [2024-07-12 12:34:18.288097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35432 len:8 PRP1 0x0 PRP2 0x0 00:22:56.085 [2024-07-12 12:34:18.288110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.288125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.085 [2024-07-12 12:34:18.288135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.085 [2024-07-12 12:34:18.288145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35440 len:8 PRP1 0x0 PRP2 0x0 00:22:56.085 [2024-07-12 12:34:18.288159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.288216] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14f9250 was disconnected and freed. reset controller. 
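The long run of ABORTED - SQ DELETION completions above is the expected signature of this test: when the active submission queue is torn down for a controller reset, bdev_nvme fails back every queued READ/WRITE and each one is reported through spdk_nvme_print_completion. The script later counts the successful resets by grepping the captured bdevperf output; a minimal stand-alone sketch of that kind of check (log path and grep patterns taken from this run, not a fixed interface) would be:

  log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  # count completed controller resets and completions failed by SQ deletion
  resets=$(grep -c 'Resetting controller successful' "$log")
  aborted=$(grep -c 'ABORTED - SQ DELETION' "$log")
  echo "resets=$resets aborted_completions=$aborted"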
00:22:56.085 [2024-07-12 12:34:18.288239] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:56.085 [2024-07-12 12:34:18.288296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.085 [2024-07-12 12:34:18.288317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.085 [2024-07-12 12:34:18.288333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.086 [2024-07-12 12:34:18.288347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.086 [2024-07-12 12:34:18.288362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.086 [2024-07-12 12:34:18.288375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.086 [2024-07-12 12:34:18.288390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.086 [2024-07-12 12:34:18.288404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.086 [2024-07-12 12:34:18.288418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:56.086 [2024-07-12 12:34:18.288465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fef60 (9): Bad file descriptor 00:22:56.086 [2024-07-12 12:34:18.292277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:56.086 [2024-07-12 12:34:18.325269] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:56.086 00:22:56.086 Latency(us) 00:22:56.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.086 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:56.086 Verification LBA range: start 0x0 length 0x4000 00:22:56.086 NVMe0n1 : 15.01 8973.69 35.05 227.94 0.00 13877.64 655.36 14656.23 00:22:56.086 =================================================================================================================== 00:22:56.086 Total : 8973.69 35.05 227.94 0.00 13877.64 655.36 14656.23 00:22:56.086 Received shutdown signal, test time was about 15.000000 seconds 00:22:56.086 00:22:56.086 Latency(us) 00:22:56.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.086 =================================================================================================================== 00:22:56.086 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:56.086 12:34:24 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:56.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
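At this point the first failover pass is complete: the script greps the captured output for 'Resetting controller successful' and, as the next lines show, finds the expected three. It then starts a second bdevperf instance paused (-z) on /var/tmp/bdevperf.sock so it can be configured over RPC before any I/O runs. A condensed sketch of the call sequence visible in the trace that follows (NQN, addresses and ports as used in this run; this shows the shape of the flow, not the literal script):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  nqn=nqn.2016-06.io.spdk:cnode1

  # target side: expose two additional portals to fail over to
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422

  # initiator side: register all three paths with bdev_nvme inside bdevperf
  for port in 4420 4421 4422; do
    $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
         -s "$port" -f ipv4 -n "$nqn"
  done

  # detaching the active path forces a failover to the next registered trid
  $rpc -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
       -f ipv4 -n "$nqn"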
00:22:56.086 12:34:24 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:56.086 12:34:24 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:56.086 12:34:24 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=90857 00:22:56.086 12:34:24 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 90857 /var/tmp/bdevperf.sock 00:22:56.086 12:34:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 90857 ']' 00:22:56.086 12:34:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.086 12:34:24 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:56.086 12:34:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:56.086 12:34:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.086 12:34:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:56.086 12:34:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:56.344 12:34:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:56.344 12:34:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:56.344 12:34:25 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:56.602 [2024-07-12 12:34:25.500417] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:56.602 12:34:25 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:56.860 [2024-07-12 12:34:25.808710] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:56.860 12:34:25 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:57.117 NVMe0n1 00:22:57.117 12:34:26 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:57.376 00:22:57.634 12:34:26 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:57.891 00:22:57.891 12:34:26 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:57.891 12:34:26 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:58.149 12:34:27 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:58.408 12:34:27 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:01.743 12:34:30 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:01.743 12:34:30 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:01.743 12:34:30 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=90935 00:23:01.743 12:34:30 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:01.743 12:34:30 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 90935 00:23:03.117 0 00:23:03.117 12:34:31 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:03.117 [2024-07-12 12:34:24.273991] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:03.117 [2024-07-12 12:34:24.274100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90857 ] 00:23:03.117 [2024-07-12 12:34:24.417229] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.117 [2024-07-12 12:34:24.506857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.117 [2024-07-12 12:34:24.565270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:03.117 [2024-07-12 12:34:27.422409] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:03.117 [2024-07-12 12:34:27.422574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.117 [2024-07-12 12:34:27.422601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.117 [2024-07-12 12:34:27.422620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.117 [2024-07-12 12:34:27.422634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.117 [2024-07-12 12:34:27.422648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.117 [2024-07-12 12:34:27.422662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.117 [2024-07-12 12:34:27.422676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.117 [2024-07-12 12:34:27.422688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.117 [2024-07-12 12:34:27.422702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:03.117 [2024-07-12 12:34:27.422754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:03.117 [2024-07-12 12:34:27.422786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b1f60 (9): Bad file descriptor 00:23:03.117 [2024-07-12 12:34:27.431074] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
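The cat of try.txt above confirms that after the 10.0.0.2:4420 path was detached, the one-second verify run failed over to 10.0.0.2:4421 and completed a successful controller reset. The driver pattern around it, with the binaries and flags exactly as launched earlier in this trace, reduces to roughly this (a sketch, not the test script itself):

  # start bdevperf paused so bdevs can be configured over its RPC socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!

  # ... attach/detach NVMe0 paths over the socket as sketched above ...

  # kick off the actual I/O run and wait for it to finish
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests &
  run_test_pid=$!
  wait "$run_test_pid"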
00:23:03.117 Running I/O for 1 seconds... 00:23:03.117 00:23:03.117 Latency(us) 00:23:03.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.117 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:03.117 Verification LBA range: start 0x0 length 0x4000 00:23:03.117 NVMe0n1 : 1.02 7639.70 29.84 0.00 0.00 16604.40 3038.49 18707.55 00:23:03.117 =================================================================================================================== 00:23:03.117 Total : 7639.70 29.84 0.00 0.00 16604.40 3038.49 18707.55 00:23:03.117 12:34:31 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:03.117 12:34:31 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:03.376 12:34:32 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:03.691 12:34:32 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:03.691 12:34:32 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:03.691 12:34:32 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:03.951 12:34:32 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:07.232 12:34:35 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:07.232 12:34:35 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:07.232 12:34:36 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 90857 00:23:07.232 12:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 90857 ']' 00:23:07.232 12:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 90857 00:23:07.232 12:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:07.232 12:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.232 12:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90857 00:23:07.232 killing process with pid 90857 00:23:07.232 12:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:07.232 12:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:07.232 12:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90857' 00:23:07.232 12:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 90857 00:23:07.232 12:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 90857 00:23:07.498 12:34:36 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:07.498 12:34:36 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:07.784 rmmod nvme_tcp 00:23:07.784 rmmod nvme_fabrics 00:23:07.784 rmmod nvme_keyring 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 90598 ']' 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 90598 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 90598 ']' 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 90598 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90598 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:07.784 killing process with pid 90598 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90598' 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 90598 00:23:07.784 12:34:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 90598 00:23:08.043 12:34:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:08.043 12:34:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:08.043 12:34:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:08.043 12:34:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:08.043 12:34:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:08.043 12:34:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.043 12:34:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:08.043 12:34:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.301 12:34:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:08.301 ************************************ 00:23:08.301 END TEST nvmf_failover 00:23:08.301 ************************************ 00:23:08.301 00:23:08.301 real 0m33.287s 00:23:08.301 user 2m9.220s 00:23:08.301 sys 0m5.818s 00:23:08.301 12:34:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:08.301 12:34:37 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@10 -- # set +x 00:23:08.301 12:34:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:08.301 12:34:37 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:08.301 12:34:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:08.301 12:34:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:08.301 12:34:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:08.301 ************************************ 00:23:08.301 START TEST nvmf_host_discovery 00:23:08.302 ************************************ 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:08.302 * Looking for test storage... 00:23:08.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:08.302 Cannot find device "nvmf_tgt_br" 00:23:08.302 
12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:08.302 Cannot find device "nvmf_tgt_br2" 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:08.302 Cannot find device "nvmf_tgt_br" 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:08.302 Cannot find device "nvmf_tgt_br2" 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:23:08.302 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:08.560 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:08.560 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:08.560 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:08.560 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:23:08.560 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:08.560 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:08.560 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:23:08.560 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:08.560 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:08.560 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:08.560 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:08.560 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:08.560 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:08.560 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:08.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:08.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:23:08.561 00:23:08.561 --- 10.0.0.2 ping statistics --- 00:23:08.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.561 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:08.561 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:08.561 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:23:08.561 00:23:08.561 --- 10.0.0.3 ping statistics --- 00:23:08.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.561 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:08.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:08.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:23:08.561 00:23:08.561 --- 10.0.0.1 ping statistics --- 00:23:08.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.561 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=91204 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 91204 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 91204 ']' 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.561 12:34:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.819 12:34:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.819 12:34:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:08.819 [2024-07-12 12:34:37.695096] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:08.819 [2024-07-12 12:34:37.695202] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.819 [2024-07-12 12:34:37.837347] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.077 [2024-07-12 12:34:37.926478] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
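With the veth pair, bridge and namespace plumbing verified by the pings above, the discovery test runs one nvmf_tgt inside nvmf_tgt_ns_spdk (core mask 0x2) and a second nvmf_tgt bound to /tmp/host.sock to act as the discovering host. The RPC bring-up that the following trace performs reduces to roughly this (arguments copied from this run; rpc_cmd is the suite's RPC helper):

  # target side: TCP transport, a discovery listener and two null bdevs
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
          -t tcp -a 10.0.0.2 -s 8009
  rpc_cmd bdev_null_create null0 1000 512
  rpc_cmd bdev_null_create null1 1000 512

  # host side: enable bdev_nvme logging and start discovery against port 8009
  rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
          -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test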
00:23:09.077 [2024-07-12 12:34:37.926561] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.077 [2024-07-12 12:34:37.926589] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.077 [2024-07-12 12:34:37.926598] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.077 [2024-07-12 12:34:37.926605] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.077 [2024-07-12 12:34:37.926630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.077 [2024-07-12 12:34:37.984292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:09.642 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.642 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:09.642 12:34:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:09.642 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:09.642 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.642 12:34:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.642 12:34:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:09.642 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.642 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.642 [2024-07-12 12:34:38.718382] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.642 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.642 12:34:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:09.642 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.642 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.900 [2024-07-12 12:34:38.726493] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.900 null0 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.900 null1 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=91232 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 91232 /tmp/host.sock 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 91232 ']' 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.900 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.900 12:34:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.900 [2024-07-12 12:34:38.811478] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:09.900 [2024-07-12 12:34:38.811582] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91232 ] 00:23:09.901 [2024-07-12 12:34:38.949099] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.158 [2024-07-12 12:34:39.054117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.159 [2024-07-12 12:34:39.108235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:10.762 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:10.762 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:10.762 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:10.762 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:10.762 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.762 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.762 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.762 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:10.762 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.762 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.762 12:34:39 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.762 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:10.762 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:10.762 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:10.762 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.762 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:10.762 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.763 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:10.763 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:10.763 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.763 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:10.763 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:10.763 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.763 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.763 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:10.763 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.763 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:10.763 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:10.763 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:11.021 12:34:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.021 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:11.021 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:11.021 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:11.021 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.021 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:11.021 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.021 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:11.021 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:11.021 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.280 [2024-07-12 12:34:40.106821] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:11.280 
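Editor's note: the flow so far is: bring up the TCP transport and a listener for the well-known discovery subsystem on 10.0.0.2:8009, create two null bdevs, start a second nvmf_tgt on the initiator side driven over /tmp/host.sock, kick off bdev_nvme_start_discovery against the discovery service, then create nqn.2016-06.io.spdk:cnode0 and attach null0 to it. rpc_cmd in the log is the autotest wrapper around scripts/rpc.py, so the same sequence can be replayed by hand roughly as below (grouped by side for readability, whereas the log interleaves the calls; the target-side RPCs go to the default /var/tmp/spdk.sock socket shown by waitforlisten above).

  # target side: transport, discovery listener, backing bdevs, data subsystem
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0

  # host side: the initiator nvmf_tgt listens on /tmp/host.sock and runs the discovery client
  scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers   # still [] - the host NQN has no access yet
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs              # likewise, no nvme0n* bdevs yet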
12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:23:11.280 12:34:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:11.847 [2024-07-12 12:34:40.771460] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:11.847 [2024-07-12 12:34:40.771504] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:11.847 [2024-07-12 12:34:40.771523] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:11.847 [2024-07-12 12:34:40.777551] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:11.847 [2024-07-12 12:34:40.834930] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:11.847 [2024-07-12 12:34:40.834960] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:12.413 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:12.414 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:12.672 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:12.672 
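Editor's note: the repeated get_subsystem_names / get_bdev_list / get_notification_count checks above are driven by a small poll-and-retry helper: each condition is re-evaluated up to ten times, one second apart, and the notification counter tracks how many notify events (new bdev, new namespace, and so on) arrived since the last observed notify_id. The helper bodies are only visible here through xtrace, so the following is a reconstruction under those assumptions, not a verbatim copy of host/discovery.sh, and it calls scripts/rpc.py directly where the harness keeps a persistent rpc_cmd session.

  # re-evaluate a shell condition up to 10 times, one second apart
  waitforcondition() {
      local cond=$1 max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1
  }

  # bdev names on the host, sorted and space-joined, e.g. "nvme0n1 nvme0n2"
  get_bdev_list() {
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # count notify events newer than $notify_id, then advance the cursor by that many
  get_notification_count() {
      notification_count=$(scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

  # usage matching the log: once nvmf_subsystem_add_host grants nqn.2021-12.io.spdk:test access
  # and null1 is added to cnode0, both namespaces must surface as bdevs on the host
  notify_id=0
  waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'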
12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.673 [2024-07-12 12:34:41.652302] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:12.673 [2024-07-12 12:34:41.653102] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:12.673 [2024-07-12 12:34:41.653141] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:12.673 [2024-07-12 12:34:41.659105] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 
00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:12.673 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:12.673 [2024-07-12 12:34:41.717356] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:12.673 [2024-07-12 12:34:41.717385] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:12.673 [2024-07-12 12:34:41.717392] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- 
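Editor's note: adding a second listener on 10.0.0.2:4421 raises an AER on the discovery controller; the host re-reads the discovery log page and attaches 4421 as an additional path to the same nvme0 controller, which is why both trsvcids are expected next. A quick way to observe that by hand, using the same jq filter as the log's get_subsystem_paths helper:

  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  # expected once the new path is attached: 4420 4421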
common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.932 [2024-07-12 12:34:41.901355] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:12.932 [2024-07-12 12:34:41.901393] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:12.932 [2024-07-12 12:34:41.901963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.932 [2024-07-12 12:34:41.902006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.932 [2024-07-12 12:34:41.902021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.932 [2024-07-12 12:34:41.902030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.932 [2024-07-12 12:34:41.902040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.932 [2024-07-12 12:34:41.902049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.932 [2024-07-12 12:34:41.902059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:12.932 [2024-07-12 12:34:41.902068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:12.932 [2024-07-12 12:34:41.902077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23511e0 is same with the state(5) to be set 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:12.932 12:34:41 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:12.932 [2024-07-12 12:34:41.907348] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:12.932 [2024-07-12 12:34:41.907380] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:12.932 [2024-07-12 12:34:41.907471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23511e0 (9): Bad file descriptor 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.932 12:34:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:13.191 12:34:42 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.191 
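Editor's note: the mirror-image teardown follows. Removing the 4420 listener makes the discovery poller drop that path (the "not found" / "Bad file descriptor" lines above are the old admin queue being torn down), and bdev_nvme_stop_discovery then detaches the controller entirely, so the controller list and the bdev list drain to empty in the checks just below while two further notify events are recorded. By hand this is roughly:

  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs    # now only 4421
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers   # []
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs              # []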
12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.191 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.450 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:13.450 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:13.450 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:13.450 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:13.450 12:34:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:13.450 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.450 12:34:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.385 [2024-07-12 12:34:43.312358] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:14.385 [2024-07-12 12:34:43.312564] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:14.385 [2024-07-12 12:34:43.312628] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:14.385 [2024-07-12 12:34:43.318395] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:14.385 [2024-07-12 12:34:43.379193] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:14.385 [2024-07-12 12:34:43.379402] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:14.385 request: 00:23:14.385 { 00:23:14.385 "name": "nvme", 00:23:14.385 "trtype": "tcp", 00:23:14.385 "traddr": "10.0.0.2", 00:23:14.385 "adrfam": "ipv4", 00:23:14.385 "trsvcid": "8009", 00:23:14.385 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:14.385 "wait_for_attach": true, 00:23:14.385 "method": "bdev_nvme_start_discovery", 00:23:14.385 "req_id": 1 00:23:14.385 } 00:23:14.385 Got JSON-RPC error response 00:23:14.385 response: 00:23:14.385 { 00:23:14.385 "code": -17, 00:23:14.385 "message": "File exists" 00:23:14.385 } 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:14.385 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.644 request: 00:23:14.644 { 00:23:14.644 "name": "nvme_second", 00:23:14.644 "trtype": "tcp", 00:23:14.644 "traddr": "10.0.0.2", 00:23:14.644 "adrfam": "ipv4", 00:23:14.644 "trsvcid": "8009", 00:23:14.644 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:14.644 "wait_for_attach": true, 00:23:14.644 "method": "bdev_nvme_start_discovery", 00:23:14.644 "req_id": 1 00:23:14.644 } 00:23:14.644 Got JSON-RPC error response 00:23:14.644 response: 00:23:14.644 { 00:23:14.644 "code": -17, 00:23:14.644 "message": "File exists" 00:23:14.644 } 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.644 12:34:43 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.644 12:34:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.579 [2024-07-12 12:34:44.627878] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:15.579 [2024-07-12 12:34:44.627966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2386780 with addr=10.0.0.2, port=8010 00:23:15.579 [2024-07-12 12:34:44.627993] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:15.579 [2024-07-12 12:34:44.628004] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:15.579 [2024-07-12 12:34:44.628014] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:16.955 [2024-07-12 12:34:45.627859] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.955 [2024-07-12 12:34:45.627935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2386780 with addr=10.0.0.2, port=8010 00:23:16.955 [2024-07-12 12:34:45.627961] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:16.955 [2024-07-12 12:34:45.627972] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:16.955 [2024-07-12 12:34:45.627982] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:17.572 [2024-07-12 12:34:46.627693] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:17.572 request: 00:23:17.572 { 00:23:17.572 "name": "nvme_second", 00:23:17.572 "trtype": "tcp", 00:23:17.572 "traddr": "10.0.0.2", 00:23:17.572 "adrfam": "ipv4", 00:23:17.572 "trsvcid": "8010", 00:23:17.572 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:17.572 "wait_for_attach": false, 00:23:17.572 "attach_timeout_ms": 3000, 00:23:17.572 "method": "bdev_nvme_start_discovery", 00:23:17.572 "req_id": 1 00:23:17.572 } 00:23:17.572 Got JSON-RPC error response 00:23:17.572 response: 00:23:17.572 { 00:23:17.572 "code": -110, 
00:23:17.572 "message": "Connection timed out" 00:23:17.572 } 00:23:17.572 12:34:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:17.572 12:34:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:17.572 12:34:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:17.572 12:34:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:17.572 12:34:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:17.572 12:34:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:17.572 12:34:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:17.572 12:34:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:17.572 12:34:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:17.572 12:34:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.572 12:34:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.572 12:34:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 91232 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:17.831 rmmod nvme_tcp 00:23:17.831 rmmod nvme_fabrics 00:23:17.831 rmmod nvme_keyring 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 91204 ']' 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 91204 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 91204 ']' 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 91204 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91204 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:17.831 killing 
process with pid 91204 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91204' 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 91204 00:23:17.831 12:34:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 91204 00:23:18.090 12:34:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:18.090 12:34:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:18.090 12:34:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:18.090 12:34:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:18.090 12:34:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:18.090 12:34:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.090 12:34:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:18.090 12:34:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.090 12:34:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:18.090 00:23:18.090 real 0m9.924s 00:23:18.090 user 0m19.043s 00:23:18.090 sys 0m1.916s 00:23:18.090 12:34:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:18.090 12:34:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.090 ************************************ 00:23:18.090 END TEST nvmf_host_discovery 00:23:18.090 ************************************ 00:23:18.090 12:34:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:18.090 12:34:47 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:18.090 12:34:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:18.090 12:34:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:18.090 12:34:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:18.348 ************************************ 00:23:18.348 START TEST nvmf_host_multipath_status 00:23:18.348 ************************************ 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:18.348 * Looking for test storage... 
00:23:18.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:18.348 Cannot find device "nvmf_tgt_br" 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:23:18.348 Cannot find device "nvmf_tgt_br2" 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:18.348 Cannot find device "nvmf_tgt_br" 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:18.348 Cannot find device "nvmf_tgt_br2" 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:23:18.348 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:18.349 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:18.349 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:18.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:18.349 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:23:18.349 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:18.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:18.349 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:23:18.349 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:18.349 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:18.349 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:18.349 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:18.349 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:18.606 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:18.606 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:18.606 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:18.606 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:18.606 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:18.606 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:18.606 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:18.606 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:18.606 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:18.606 12:34:47 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:18.606 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:18.606 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:18.606 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:18.606 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:18.606 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:18.606 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:18.606 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:18.606 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:18.606 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:18.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:18.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:23:18.607 00:23:18.607 --- 10.0.0.2 ping statistics --- 00:23:18.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.607 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:18.607 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:18.607 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:23:18.607 00:23:18.607 --- 10.0.0.3 ping statistics --- 00:23:18.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.607 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:18.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:18.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:23:18.607 00:23:18.607 --- 10.0.0.1 ping statistics --- 00:23:18.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.607 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=91688 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 91688 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 91688 ']' 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:18.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:18.607 12:34:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:18.607 [2024-07-12 12:34:47.667926] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:23:18.607 [2024-07-12 12:34:47.668177] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.864 [2024-07-12 12:34:47.804871] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:18.864 [2024-07-12 12:34:47.895828] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.864 [2024-07-12 12:34:47.895892] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.864 [2024-07-12 12:34:47.895907] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.864 [2024-07-12 12:34:47.895917] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.864 [2024-07-12 12:34:47.895926] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:18.864 [2024-07-12 12:34:47.896046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.864 [2024-07-12 12:34:47.896206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.120 [2024-07-12 12:34:47.951597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:19.686 12:34:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:19.687 12:34:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:19.687 12:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:19.687 12:34:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:19.687 12:34:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:19.687 12:34:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.687 12:34:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=91688 00:23:19.687 12:34:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:19.944 [2024-07-12 12:34:48.908834] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.944 12:34:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:20.208 Malloc0 00:23:20.208 12:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:20.485 12:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:20.743 12:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:21.001 [2024-07-12 12:34:49.956520] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.001 12:34:49 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:21.259 [2024-07-12 12:34:50.192660] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:21.259 12:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=91738 00:23:21.259 12:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:21.259 12:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:21.259 12:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 91738 /var/tmp/bdevperf.sock 00:23:21.259 12:34:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 91738 ']' 00:23:21.259 12:34:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:21.259 12:34:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:21.260 12:34:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:21.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:21.260 12:34:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:21.260 12:34:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:22.191 12:34:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:22.191 12:34:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:22.191 12:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:22.447 12:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:23.013 Nvme0n1 00:23:23.013 12:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:23.362 Nvme0n1 00:23:23.362 12:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:23.362 12:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:25.255 12:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:25.255 12:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:25.511 12:34:54 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:25.767 12:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:26.698 12:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:26.698 12:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:26.698 12:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.698 12:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:26.955 12:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.955 12:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:26.955 12:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.955 12:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:27.212 12:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:27.212 12:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:27.212 12:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:27.212 12:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.777 12:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.777 12:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:27.777 12:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.777 12:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:27.777 12:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.777 12:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:27.777 12:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:27.777 12:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.341 12:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.341 12:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:23:28.341 12:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.341 12:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:28.341 12:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.341 12:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:28.341 12:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:28.598 12:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:28.855 12:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:29.842 12:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:29.842 12:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:29.842 12:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.842 12:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:30.099 12:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:30.099 12:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:30.099 12:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.099 12:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:30.356 12:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.356 12:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:30.356 12:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.356 12:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:30.614 12:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.614 12:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:30.614 12:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.614 12:34:59 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:30.871 12:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.871 12:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:30.871 12:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:30.871 12:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.128 12:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:31.128 12:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:31.128 12:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.128 12:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:31.385 12:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:31.385 12:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:31.385 12:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:31.642 12:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:31.900 12:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:32.832 12:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:32.832 12:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:32.832 12:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.832 12:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:33.090 12:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.090 12:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:33.090 12:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:33.090 12:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.654 12:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:23:33.654 12:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:33.654 12:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.654 12:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:33.654 12:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.654 12:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:33.654 12:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:33.654 12:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.911 12:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.911 12:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:33.911 12:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.911 12:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:34.167 12:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.167 12:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:34.167 12:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.167 12:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:34.422 12:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.422 12:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:34.422 12:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:34.679 12:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:34.936 12:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:35.870 12:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:35.870 12:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:35.870 12:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.870 12:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:36.127 12:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.127 12:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:36.127 12:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.127 12:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:36.386 12:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:36.386 12:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:36.386 12:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.386 12:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:36.952 12:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.952 12:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:36.952 12:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.952 12:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:37.209 12:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.209 12:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:37.209 12:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.209 12:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:37.209 12:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.209 12:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:37.209 12:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.209 12:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:37.479 12:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:37.479 12:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible 
inaccessible 00:23:37.479 12:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:37.743 12:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:38.001 12:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:38.935 12:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:38.935 12:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:38.935 12:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.935 12:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:39.193 12:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:39.193 12:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:39.193 12:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.193 12:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:39.451 12:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:39.452 12:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:39.452 12:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.452 12:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:39.709 12:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.709 12:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:39.709 12:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.709 12:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:40.275 12:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.275 12:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:40.275 12:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:40.275 12:35:09 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.532 12:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:40.533 12:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:40.533 12:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.533 12:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:40.790 12:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:40.790 12:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:40.790 12:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:41.048 12:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:41.307 12:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:42.240 12:35:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:42.240 12:35:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:42.240 12:35:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.240 12:35:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:42.497 12:35:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:42.497 12:35:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:42.497 12:35:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.497 12:35:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:42.755 12:35:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.755 12:35:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:42.755 12:35:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.755 12:35:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:43.013 12:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.013 
12:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:43.013 12:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:43.013 12:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.271 12:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.272 12:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:43.272 12:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.272 12:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:43.530 12:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:43.530 12:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:43.530 12:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:43.530 12:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.787 12:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.787 12:35:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:44.044 12:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:44.044 12:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:44.301 12:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:44.559 12:35:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:45.491 12:35:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:45.491 12:35:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:45.491 12:35:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.491 12:35:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:45.748 12:35:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.748 12:35:14 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:45.748 12:35:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.748 12:35:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:46.007 12:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.007 12:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:46.007 12:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.007 12:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:46.265 12:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.265 12:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:46.265 12:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.265 12:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:46.522 12:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.522 12:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:46.523 12:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.523 12:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:46.800 12:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.800 12:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:46.800 12:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.800 12:35:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:47.057 12:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.057 12:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:47.057 12:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:47.315 12:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 
-t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:47.573 12:35:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:48.945 12:35:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:48.945 12:35:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:48.945 12:35:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.945 12:35:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:48.945 12:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:48.945 12:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:48.945 12:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.945 12:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:49.202 12:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.202 12:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:49.202 12:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.202 12:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:49.767 12:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.767 12:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:49.767 12:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.767 12:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:49.767 12:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.767 12:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:49.767 12:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:49.767 12:35:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.025 12:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.026 12:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:50.026 12:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.026 12:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:50.283 12:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.283 12:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:50.283 12:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:50.916 12:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:50.916 12:35:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:51.848 12:35:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:51.848 12:35:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:51.848 12:35:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.848 12:35:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:52.106 12:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.106 12:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:52.106 12:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:52.106 12:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.363 12:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.363 12:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:52.363 12:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.363 12:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:52.623 12:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.623 12:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:52.623 12:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:52.623 12:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:23:52.880 12:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.880 12:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:52.880 12:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.880 12:35:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:53.136 12:35:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.137 12:35:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:53.137 12:35:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.137 12:35:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:53.394 12:35:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.394 12:35:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:53.394 12:35:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:53.651 12:35:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:54.217 12:35:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:55.157 12:35:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:55.157 12:35:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:55.157 12:35:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.157 12:35:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:55.415 12:35:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.415 12:35:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:55.415 12:35:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.415 12:35:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:55.672 12:35:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:55.672 12:35:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:23:55.672 12:35:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.672 12:35:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:55.929 12:35:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.929 12:35:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:55.929 12:35:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.929 12:35:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:56.187 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.187 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:56.187 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:56.187 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.444 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.444 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:56.444 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.444 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:56.701 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:56.701 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 91738 00:23:56.701 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 91738 ']' 00:23:56.702 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 91738 00:23:56.702 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:23:56.702 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:56.702 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91738 00:23:56.702 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:56.702 killing process with pid 91738 00:23:56.702 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:56.702 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91738' 00:23:56.702 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 91738 00:23:56.702 12:35:25 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 91738 00:23:56.965 Connection closed with partial response: 00:23:56.965 00:23:56.965 00:23:56.965 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 91738 00:23:56.965 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:56.965 [2024-07-12 12:34:50.259872] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:56.965 [2024-07-12 12:34:50.259991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91738 ] 00:23:56.965 [2024-07-12 12:34:50.395405] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.965 [2024-07-12 12:34:50.512697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.965 [2024-07-12 12:34:50.568506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:56.965 Running I/O for 90 seconds... 00:23:56.965 [2024-07-12 12:35:06.726913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.965 [2024-07-12 12:35:06.727054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.727135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:26968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.965 [2024-07-12 12:35:06.727163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.727193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.965 [2024-07-12 12:35:06.727213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.727243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.965 [2024-07-12 12:35:06.727263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.727301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.965 [2024-07-12 12:35:06.727324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.727352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:27000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.965 [2024-07-12 12:35:06.727372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.727399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26384 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:56.965 [2024-07-12 12:35:06.727419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.727446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.965 [2024-07-12 12:35:06.727465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.727493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.965 [2024-07-12 12:35:06.727513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.727540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.965 [2024-07-12 12:35:06.727559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.727586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.965 [2024-07-12 12:35:06.727636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.727665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:26424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.965 [2024-07-12 12:35:06.727687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.727713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:26432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.965 [2024-07-12 12:35:06.727732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.727757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:26440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.965 [2024-07-12 12:35:06.727777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.727821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:26448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.965 [2024-07-12 12:35:06.727842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.727868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.965 [2024-07-12 12:35:06.727886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.727915] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:77 nsid:1 lba:26464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.965 [2024-07-12 12:35:06.727935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.727964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.965 [2024-07-12 12:35:06.727984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.728010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:26480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.965 [2024-07-12 12:35:06.728029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.728055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:26488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.965 [2024-07-12 12:35:06.728075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.728102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.965 [2024-07-12 12:35:06.728121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.728148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.965 [2024-07-12 12:35:06.728168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.728195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.965 [2024-07-12 12:35:06.728227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.728257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.965 [2024-07-12 12:35:06.728277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.728304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.965 [2024-07-12 12:35:06.728324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:56.965 [2024-07-12 12:35:06.728351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.966 [2024-07-12 12:35:06.728370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 
12:35:06.728396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.966 [2024-07-12 12:35:06.728416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.728443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.966 [2024-07-12 12:35:06.728462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.728489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:26560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.966 [2024-07-12 12:35:06.728509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.728535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.966 [2024-07-12 12:35:06.728554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.728581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:27008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.728602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.728628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:27016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.728648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.728682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:27024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.728705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.728733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.728754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.728781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:27040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.728830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.728861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.728882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 
cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.728910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.728930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.728956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:27064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.728975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.729002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.729022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.729049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:27080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.729069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.729096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:26576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.966 [2024-07-12 12:35:06.729116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.729142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.966 [2024-07-12 12:35:06.729162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.729188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:26592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.966 [2024-07-12 12:35:06.729208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.729234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.966 [2024-07-12 12:35:06.729253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.729280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.966 [2024-07-12 12:35:06.729300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.729327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.966 [2024-07-12 12:35:06.729347] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.729373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.966 [2024-07-12 12:35:06.729392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.729433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.966 [2024-07-12 12:35:06.729454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.729482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:27088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.729502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.729530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:27096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.729551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.729578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.729597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.729624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:27112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.729643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.729669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:27120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.729690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.729716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:27128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.729736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.729762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.729781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.729824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 
12:35:06.729845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.729872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.729892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.729918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:27160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.729937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.729964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:27168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.729984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.730022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:27176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.730044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.730071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:27184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.730090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.730117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:27192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.730136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.730163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:27200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.730183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.730209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.966 [2024-07-12 12:35:06.730229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.730257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:26640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.966 [2024-07-12 12:35:06.730277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.730303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26648 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:56.966 [2024-07-12 12:35:06.730323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:56.966 [2024-07-12 12:35:06.730350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:26656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.966 [2024-07-12 12:35:06.730369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.730397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.730417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.730443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:26672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.730463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.730490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.730510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.730535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.730556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.730582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:26696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.730612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.730647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.967 [2024-07-12 12:35:06.730668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.730695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:27224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.967 [2024-07-12 12:35:06.730714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.730743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:27232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.967 [2024-07-12 12:35:06.730762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.730806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:43 nsid:1 lba:27240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.967 [2024-07-12 12:35:06.730829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.730856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:27248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.967 [2024-07-12 12:35:06.730875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.730902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:27256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.967 [2024-07-12 12:35:06.730922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.730949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:27264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.967 [2024-07-12 12:35:06.730969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.730996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:27272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.967 [2024-07-12 12:35:06.731016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.731045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:27280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.967 [2024-07-12 12:35:06.731064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.731091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:27288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.967 [2024-07-12 12:35:06.731112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.731139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.967 [2024-07-12 12:35:06.731158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.731184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.967 [2024-07-12 12:35:06.731213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.731242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:27312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.967 [2024-07-12 12:35:06.731262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.731299] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:27320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.967 [2024-07-12 12:35:06.731322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.731351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:27328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.967 [2024-07-12 12:35:06.731370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.731396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.967 [2024-07-12 12:35:06.731416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.731442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.731462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.731488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.731507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.731533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.731553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.731579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:26728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.731598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.731624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.731644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.731670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.731690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.731716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.731735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 
p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.731762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:26760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.731782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.731838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.731860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.731887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.731907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.731934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.731953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.731980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.731999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.732027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.732046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.732073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.732093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.732119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.732139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.732165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.732185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.732211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.732231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.732258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.732277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.732303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.732323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.733175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.967 [2024-07-12 12:35:06.733211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:56.967 [2024-07-12 12:35:06.733259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.968 [2024-07-12 12:35:06.733289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:06.733338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.968 [2024-07-12 12:35:06.733361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:06.733397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.968 [2024-07-12 12:35:06.733419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:06.733454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.968 [2024-07-12 12:35:06.733475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:06.733510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:27344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.968 [2024-07-12 12:35:06.733530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:06.733574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:27352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.968 [2024-07-12 12:35:06.733595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:06.733631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:27360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.968 [2024-07-12 12:35:06.733652] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:06.733687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.968 [2024-07-12 12:35:06.733708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:06.733743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.968 [2024-07-12 12:35:06.733763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:06.733811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:27384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.968 [2024-07-12 12:35:06.733835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:06.733870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:27392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.968 [2024-07-12 12:35:06.733891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:06.733948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:27400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.968 [2024-07-12 12:35:06.733973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:06.734008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.968 [2024-07-12 12:35:06.734043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:06.734081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:26904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.968 [2024-07-12 12:35:06.734102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:06.734138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:26912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.968 [2024-07-12 12:35:06.734159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:06.734194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.968 [2024-07-12 12:35:06.734215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:06.734251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26928 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:56.968 [2024-07-12 12:35:06.734271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:06.734314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.968 [2024-07-12 12:35:06.734336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:06.734372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.968 [2024-07-12 12:35:06.734393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:06.734430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:26952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.968 [2024-07-12 12:35:06.734451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.992723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.968 [2024-07-12 12:35:22.992812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.992851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.968 [2024-07-12 12:35:22.992870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.992898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:31152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.968 [2024-07-12 12:35:22.992913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.992935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:31168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.968 [2024-07-12 12:35:22.992950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.992972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.968 [2024-07-12 12:35:22.993015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.993039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:30848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.968 [2024-07-12 12:35:22.993055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.993077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:30888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.968 [2024-07-12 12:35:22.993093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.993114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.968 [2024-07-12 12:35:22.993129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.993151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.968 [2024-07-12 12:35:22.993167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.993189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:31240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.968 [2024-07-12 12:35:22.993204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.993225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:31256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.968 [2024-07-12 12:35:22.993240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.993263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:31272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.968 [2024-07-12 12:35:22.993279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.993301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:31288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.968 [2024-07-12 12:35:22.993317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.993339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:31304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.968 [2024-07-12 12:35:22.993354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.993376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.968 [2024-07-12 12:35:22.993391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.993413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.968 [2024-07-12 12:35:22.993427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.993449] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.968 [2024-07-12 12:35:22.993463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.993499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.968 [2024-07-12 12:35:22.993516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.993538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.968 [2024-07-12 12:35:22.993553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.993575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.968 [2024-07-12 12:35:22.993590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.993611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.968 [2024-07-12 12:35:22.993627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:56.968 [2024-07-12 12:35:22.993649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.968 [2024-07-12 12:35:22.993664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.993687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.993702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.993725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.993740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.993761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:31376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.993777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.993816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.993833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:23:56.969 [2024-07-12 12:35:22.993855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.993870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.993892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.993908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.993930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.993945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.993978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.993995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.994017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.969 [2024-07-12 12:35:22.994033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.994056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.969 [2024-07-12 12:35:22.994071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.994093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.969 [2024-07-12 12:35:22.994109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.994132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.994147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.994169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:31488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.994184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.994206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.994221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.994243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.994258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.994279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:31536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.994294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.994317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:31552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.994332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.994353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:31568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.994368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.994390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.994405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.994427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.969 [2024-07-12 12:35:22.994451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.994474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.969 [2024-07-12 12:35:22.994490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.994512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.969 [2024-07-12 12:35:22.994527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.994549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.969 [2024-07-12 12:35:22.994564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.994586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.969 [2024-07-12 12:35:22.994601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.996219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:31600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.996252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.996282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:31616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.996301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.996325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:31632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.996343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.996365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.996381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.996403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:31664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.996419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.996441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:31680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.996457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.996479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.996494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.996515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.996547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.996571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:31728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.969 [2024-07-12 12:35:22.996587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:56.969 [2024-07-12 12:35:22.996609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:56.969 [2024-07-12 12:35:22.996625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.996647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.970 [2024-07-12 12:35:22.996662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.996684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.970 [2024-07-12 12:35:22.996699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.996722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.970 [2024-07-12 12:35:22.996737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.996759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.970 [2024-07-12 12:35:22.996774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.996811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.970 [2024-07-12 12:35:22.996829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.996852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.970 [2024-07-12 12:35:22.996867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.996889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.970 [2024-07-12 12:35:22.996904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.996926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:31760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.996941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.996964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.996979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 
nsid:1 lba:31792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.997017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.997066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.997103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.997141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.997200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.997238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.997276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.997313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.970 [2024-07-12 12:35:22.997351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:31208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.997389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.997426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.997463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:31304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.997503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.970 [2024-07-12 12:35:22.997554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.970 [2024-07-12 12:35:22.997592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.970 [2024-07-12 12:35:22.997632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.997668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:31360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.997706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.997744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.997781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0032 p:0 m:0 
dnr:0 00:23:56.970 [2024-07-12 12:35:22.997821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.997837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.970 [2024-07-12 12:35:22.997874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.997911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.997949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.997971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:31536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.997986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.998013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.998040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.998065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.970 [2024-07-12 12:35:22.998081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.998103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.970 [2024-07-12 12:35:22.998119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.998141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.970 [2024-07-12 12:35:22.998157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.998179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.998194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.998216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.998231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.998254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.970 [2024-07-12 12:35:22.998269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:56.970 [2024-07-12 12:35:22.998291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:22.998306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:22.998328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:22.998343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:22.998365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:22.998380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:22.998402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:22.998417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:22.998439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:22.998454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:22.998477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:22.998499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:22.998523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:22.998539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:22.998561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:31936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.971 [2024-07-12 12:35:22.998577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:22.998600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:31952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.971 [2024-07-12 12:35:22.998615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:22.998638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:22.998654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.000976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:31968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.971 [2024-07-12 12:35:23.001007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:31984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.971 [2024-07-12 12:35:23.001055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:23.001093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:23.001131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:23.001173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:23.001211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:23.001248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:56.971 [2024-07-12 12:35:23.001298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:31784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:23.001340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:23.001377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:32008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.971 [2024-07-12 12:35:23.001414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.971 [2024-07-12 12:35:23.001451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.971 [2024-07-12 12:35:23.001488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:31848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:23.001526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:23.001563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:23.001600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.971 [2024-07-12 12:35:23.001637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 
nsid:1 lba:31632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.971 [2024-07-12 12:35:23.001675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:31664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.971 [2024-07-12 12:35:23.001736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.971 [2024-07-12 12:35:23.001775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:31728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.971 [2024-07-12 12:35:23.001842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:23.001881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:23.001919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:23.001957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.001979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.971 [2024-07-12 12:35:23.001996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.002019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.971 [2024-07-12 12:35:23.002035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.002057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.971 [2024-07-12 12:35:23.002073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.002095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.971 [2024-07-12 12:35:23.002110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.002132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.971 [2024-07-12 12:35:23.002148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.002170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:31168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.971 [2024-07-12 12:35:23.002186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.002211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.971 [2024-07-12 12:35:23.002228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.002250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.971 [2024-07-12 12:35:23.002266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:56.971 [2024-07-12 12:35:23.002296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.972 [2024-07-12 12:35:23.002314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.002336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.972 [2024-07-12 12:35:23.002352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.002384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.972 [2024-07-12 12:35:23.002401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.002424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.972 [2024-07-12 12:35:23.002440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.002462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.972 [2024-07-12 12:35:23.002478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 
dnr:0 00:23:56.972 [2024-07-12 12:35:23.002500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.972 [2024-07-12 12:35:23.002515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.002537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.972 [2024-07-12 12:35:23.002553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.002574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.972 [2024-07-12 12:35:23.002590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.002612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.972 [2024-07-12 12:35:23.002627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.004182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:31928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.972 [2024-07-12 12:35:23.004213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.004241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.972 [2024-07-12 12:35:23.004260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.004283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.972 [2024-07-12 12:35:23.004298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.004321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.972 [2024-07-12 12:35:23.004350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.004375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.972 [2024-07-12 12:35:23.004391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.004413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.972 [2024-07-12 12:35:23.004429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.004451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:32048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.972 [2024-07-12 12:35:23.004467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.004489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.972 [2024-07-12 12:35:23.004505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.004527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:31376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.972 [2024-07-12 12:35:23.004543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.004567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:31440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.972 [2024-07-12 12:35:23.004583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.004606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.972 [2024-07-12 12:35:23.004621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.004643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.972 [2024-07-12 12:35:23.004658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.004681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.972 [2024-07-12 12:35:23.004697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.004719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.972 [2024-07-12 12:35:23.004735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.004757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.972 [2024-07-12 12:35:23.004773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.004811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.972 [2024-07-12 12:35:23.004838] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.004862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.972 [2024-07-12 12:35:23.004878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.004901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.972 [2024-07-12 12:35:23.004916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.004939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.972 [2024-07-12 12:35:23.004954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.004976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:31816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.972 [2024-07-12 12:35:23.004992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.005014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.972 [2024-07-12 12:35:23.005030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.005052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:31848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.972 [2024-07-12 12:35:23.005068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.005090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.972 [2024-07-12 12:35:23.005106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.005146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:31632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.972 [2024-07-12 12:35:23.005167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.005190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.972 [2024-07-12 12:35:23.005205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.005228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:56.972 [2024-07-12 12:35:23.005244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.005266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.972 [2024-07-12 12:35:23.005281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.005304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:31776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.972 [2024-07-12 12:35:23.005320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.005352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.972 [2024-07-12 12:35:23.005369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.005393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.972 [2024-07-12 12:35:23.005409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.005431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.972 [2024-07-12 12:35:23.005447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.005469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.972 [2024-07-12 12:35:23.005485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:56.972 [2024-07-12 12:35:23.005508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.972 [2024-07-12 12:35:23.005523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.005546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.973 [2024-07-12 12:35:23.005561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.005584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.973 [2024-07-12 12:35:23.005599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.005621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 
nsid:1 lba:31888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.973 [2024-07-12 12:35:23.005637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.005659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:31920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.973 [2024-07-12 12:35:23.005675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.005697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:32104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.973 [2024-07-12 12:35:23.005712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.005735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:32120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.973 [2024-07-12 12:35:23.005750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.005773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.973 [2024-07-12 12:35:23.005802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.005836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.973 [2024-07-12 12:35:23.005853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.005876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.973 [2024-07-12 12:35:23.005892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.005914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:32184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.973 [2024-07-12 12:35:23.005930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.005953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.973 [2024-07-12 12:35:23.005969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.005991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.973 [2024-07-12 12:35:23.006007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.007955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.973 [2024-07-12 12:35:23.007987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.973 [2024-07-12 12:35:23.008035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:32208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.973 [2024-07-12 12:35:23.008073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.973 [2024-07-12 12:35:23.008111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.973 [2024-07-12 12:35:23.008149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.973 [2024-07-12 12:35:23.008186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:32272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.973 [2024-07-12 12:35:23.008224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.973 [2024-07-12 12:35:23.008275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.973 [2024-07-12 12:35:23.008315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.973 [2024-07-12 12:35:23.008364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:23:56.973 [2024-07-12 12:35:23.008385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.973 [2024-07-12 12:35:23.008402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.973 [2024-07-12 12:35:23.008440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.973 [2024-07-12 12:35:23.008479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.973 [2024-07-12 12:35:23.008517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.973 [2024-07-12 12:35:23.008554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.973 [2024-07-12 12:35:23.008593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:31984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.973 [2024-07-12 12:35:23.008655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:31704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.973 [2024-07-12 12:35:23.008694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:31816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.973 [2024-07-12 12:35:23.008732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.973 [2024-07-12 12:35:23.008802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.973 [2024-07-12 12:35:23.008845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.973 [2024-07-12 12:35:23.008883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.973 [2024-07-12 12:35:23.008921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.973 [2024-07-12 12:35:23.008958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:56.973 [2024-07-12 12:35:23.008980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.008996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.009019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.974 [2024-07-12 12:35:23.009035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.009057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.009073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.009097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.974 [2024-07-12 12:35:23.009113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.009135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.974 [2024-07-12 12:35:23.009150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.009172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.974 [2024-07-12 12:35:23.009187] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.009210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.009226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.010883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:31760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.010913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.010956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.010975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.010999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.011015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.974 [2024-07-12 12:35:23.011054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:32312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.974 [2024-07-12 12:35:23.011092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:31240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.011130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.011168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.011206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:56.974 [2024-07-12 12:35:23.011244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.974 [2024-07-12 12:35:23.011297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.011348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.011386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.974 [2024-07-12 12:35:23.011425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.974 [2024-07-12 12:35:23.011478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.011516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.011555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.011592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.011630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 
nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.974 [2024-07-12 12:35:23.011668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.011708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.011746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.011795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.974 [2024-07-12 12:35:23.011838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.974 [2024-07-12 12:35:23.011882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:32104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.974 [2024-07-12 12:35:23.011920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:32168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.974 [2024-07-12 12:35:23.011970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.011993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.012009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.012031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.012046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.012069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.012084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.012107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:32040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.974 [2024-07-12 12:35:23.012123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.012145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:32360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.974 [2024-07-12 12:35:23.012162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.012185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.974 [2024-07-12 12:35:23.012201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.012223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:32392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.974 [2024-07-12 12:35:23.012239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.012261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.974 [2024-07-12 12:35:23.012277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:56.974 [2024-07-12 12:35:23.012299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:32424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.974 [2024-07-12 12:35:23.012315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:56.975 [2024-07-12 12:35:23.012337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:32440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.975 [2024-07-12 12:35:23.012353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:56.975 [2024-07-12 12:35:23.012376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.975 [2024-07-12 12:35:23.012391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:56.975 [2024-07-12 12:35:23.012414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.975 [2024-07-12 12:35:23.012438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 
00:23:56.975 [2024-07-12 12:35:23.012462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.975 [2024-07-12 12:35:23.012479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:56.975 [2024-07-12 12:35:23.012502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:31872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.975 [2024-07-12 12:35:23.012517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:56.975 [2024-07-12 12:35:23.012540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.975 [2024-07-12 12:35:23.012556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:56.975 [2024-07-12 12:35:23.015406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.975 [2024-07-12 12:35:23.015438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:56.975 [2024-07-12 12:35:23.015482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:32488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.975 [2024-07-12 12:35:23.015504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:56.975 [2024-07-12 12:35:23.015528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.975 [2024-07-12 12:35:23.015544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:56.975 [2024-07-12 12:35:23.015567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.975 [2024-07-12 12:35:23.015583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:56.975 [2024-07-12 12:35:23.015614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.975 [2024-07-12 12:35:23.015629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:56.975 [2024-07-12 12:35:23.015651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:32192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.975 [2024-07-12 12:35:23.015667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:56.975 [2024-07-12 12:35:23.015689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:32512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.975 [2024-07-12 12:35:23.015704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:56.975 [2024-07-12 12:35:23.015726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.975 [2024-07-12 12:35:23.015741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:56.975 [2024-07-12 12:35:23.015764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:32232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.975 [2024-07-12 12:35:23.015814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:56.975 [2024-07-12 12:35:23.015842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.975 [2024-07-12 12:35:23.015858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:56.975 [2024-07-12 12:35:23.015882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.975 [2024-07-12 12:35:23.015899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:56.975 [2024-07-12 12:35:23.015920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.975 [2024-07-12 12:35:23.015936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:56.975 [2024-07-12 12:35:23.015958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.975 [2024-07-12 12:35:23.015974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:56.975 [2024-07-12 12:35:23.015997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.975 [2024-07-12 12:35:23.016012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:23:56.975 Received shutdown signal, test time was about 33.374133 seconds
00:23:56.975
00:23:56.975                                                                                   Latency(us)
00:23:56.975 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:56.975 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:56.975 Verification LBA range: start 0x0 length 0x4000
00:23:56.975 Nvme0n1                                :      33.37    8732.04      34.11       0.00     0.00   14626.10     606.95 4026531.84
00:23:56.975 ===================================================================================================================
00:23:56.975 Total                                  :              8732.04      34.11       0.00     0.00   14626.10     606.95 4026531.84
00:23:56.975 12:35:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:57.231 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:23:57.231 12:35:26
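As a quick plausibility check of the summary table above: with the 4096-byte IO size and queue depth 128 printed in the Job line, the reported IOPS, MiB/s and average latency are mutually consistent. A minimal awk sketch (all numbers are copied from the rows above; nothing below is produced by the test itself):

awk 'BEGIN {
  # Figures from the Nvme0n1 row; io_size and qdepth from the Job line.
  iops = 8732.04; io_size = 4096; qdepth = 128; avg_lat_us = 14626.10
  printf "MiB/s derived from IOPS: %.2f (reported 34.11)\n", iops * io_size / (1024 * 1024)
  printf "IOPS implied by qdepth over avg latency: %.0f (reported 8732.04)\n", qdepth / (avg_lat_us / 1e6)
}'

The second figure comes out near 8750 rather than exactly 8732, which is expected: the queue is not completely full for the whole run (the test deliberately flips path states), so the Little's-law estimate is only approximate.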
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:57.231 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:57.231 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:57.231 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:23:57.231 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:57.231 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:23:57.231 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:57.231 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:57.231 rmmod nvme_tcp 00:23:57.231 rmmod nvme_fabrics 00:23:57.231 rmmod nvme_keyring 00:23:57.486 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:57.486 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:23:57.486 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:23:57.486 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 91688 ']' 00:23:57.486 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 91688 00:23:57.486 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 91688 ']' 00:23:57.486 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 91688 00:23:57.486 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:23:57.486 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:57.486 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91688 00:23:57.486 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:57.486 killing process with pid 91688 00:23:57.486 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:57.486 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91688' 00:23:57.486 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 91688 00:23:57.486 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 91688 00:23:57.750 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:57.750 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:57.750 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:57.750 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:57.750 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:57.750 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.750 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:57.750 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.750 
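For orientation, the nvmftestfini sequence just traced condenses to roughly the following steps (a simplified sketch assembled from the xtrace lines above; the PID is the one from this run, and the body of _remove_spdk_ns is not shown in this excerpt, so that step is an assumption):

  sync
  # killprocess 91688: identify the target process, then stop it and wait for exit
  process_name=$(ps --no-headers -o comm= 91688)   # "reactor_0" in this run
  echo "killing process with pid 91688"
  kill 91688
  wait 91688                    # valid here because the framework launched it from this shell
  # unload the initiator-side kernel modules loaded for the test
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # tear down the test namespace and flush the initiator-side address
  ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of _remove_spdk_ns (definition not shown)
  ip -4 addr flush nvmf_init_if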
12:35:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:57.750 ************************************ 00:23:57.750 END TEST nvmf_host_multipath_status 00:23:57.750 ************************************ 00:23:57.750 00:23:57.750 real 0m39.453s 00:23:57.750 user 2m7.075s 00:23:57.750 sys 0m12.046s 00:23:57.750 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:57.750 12:35:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:57.750 12:35:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:57.750 12:35:26 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:57.750 12:35:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:57.750 12:35:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:57.750 12:35:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:57.750 ************************************ 00:23:57.750 START TEST nvmf_discovery_remove_ifc 00:23:57.750 ************************************ 00:23:57.750 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:57.750 * Looking for test storage... 00:23:57.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:57.750 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:57.750 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:57.750 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:57.751 12:35:26 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:57.751 Cannot find device "nvmf_tgt_br" 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:23:57.751 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:58.029 Cannot find device "nvmf_tgt_br2" 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:58.029 Cannot find device "nvmf_tgt_br" 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:58.029 Cannot find device "nvmf_tgt_br2" 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:58.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:58.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:58.029 12:35:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:58.029 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:58.029 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:58.029 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:58.029 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:58.029 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:58.029 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:58.029 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:58.029 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:58.029 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:58.029 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:58.029 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:58.029 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:58.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:58.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:23:58.029 00:23:58.029 --- 10.0.0.2 ping statistics --- 00:23:58.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.029 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:23:58.029 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:58.029 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:58.029 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:23:58.029 00:23:58.029 --- 10.0.0.3 ping statistics --- 00:23:58.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.029 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:23:58.029 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:58.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:58.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:23:58.285 00:23:58.285 --- 10.0.0.1 ping statistics --- 00:23:58.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.285 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=92519 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 92519 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 92519 ']' 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:58.285 12:35:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:58.285 [2024-07-12 12:35:27.171600] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:23:58.285 [2024-07-12 12:35:27.171678] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.285 [2024-07-12 12:35:27.307703] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.542 [2024-07-12 12:35:27.428202] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.542 [2024-07-12 12:35:27.428278] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.542 [2024-07-12 12:35:27.428291] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.542 [2024-07-12 12:35:27.428300] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.542 [2024-07-12 12:35:27.428307] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:58.542 [2024-07-12 12:35:27.428341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.542 [2024-07-12 12:35:27.502444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:59.473 12:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:59.473 12:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:23:59.473 12:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:59.473 12:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:59.473 12:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:59.473 12:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.473 12:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:59.473 12:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.473 12:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:59.473 [2024-07-12 12:35:28.244532] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.473 [2024-07-12 12:35:28.252652] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:59.473 null0 00:23:59.473 [2024-07-12 12:35:28.284570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.473 12:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.473 12:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=92556 00:23:59.473 12:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:59.473 12:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 92556 /tmp/host.sock 00:23:59.473 12:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 92556 ']' 00:23:59.473 12:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:59.473 12:35:28 
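With the target application now up inside the namespace, the network plumbing that nvmf_veth_init traced above can be condensed as follows (a sketch only: interface names and addresses are the ones printed in the log, nvmf_tgt stands for the full build path shown above, and the second target interface at 10.0.0.3 plus the individual link-up steps are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # the target then runs inside the namespace and listens on 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &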
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:59.473 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:59.473 12:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:59.473 12:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:59.473 12:35:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:59.473 [2024-07-12 12:35:28.366868] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:59.473 [2024-07-12 12:35:28.366989] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92556 ] 00:23:59.473 [2024-07-12 12:35:28.506458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.730 [2024-07-12 12:35:28.631197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.293 12:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.293 12:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:00.293 12:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:00.293 12:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:00.293 12:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.293 12:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:00.293 12:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.294 12:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:00.294 12:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.294 12:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:00.550 [2024-07-12 12:35:29.404065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:00.550 12:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.550 12:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:00.550 12:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.550 12:35:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:01.478 [2024-07-12 12:35:30.467754] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:01.478 [2024-07-12 12:35:30.467836] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:01.478 [2024-07-12 12:35:30.467859] 
bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:01.478 [2024-07-12 12:35:30.473820] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:01.478 [2024-07-12 12:35:30.531659] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:01.478 [2024-07-12 12:35:30.531769] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:01.478 [2024-07-12 12:35:30.531817] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:01.478 [2024-07-12 12:35:30.531844] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:01.478 [2024-07-12 12:35:30.531879] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:01.478 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.478 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:01.478 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:01.478 [2024-07-12 12:35:30.536243] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2015ae0 was disconnected and freed. delete nvme_qpair. 00:24:01.478 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:01.478 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.478 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:01.478 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:01.478 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:01.478 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:01.733 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.733 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:01.733 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:24:01.733 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:24:01.733 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:01.733 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:01.733 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:01.733 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.733 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:01.733 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:01.733 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:01.733 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:24:01.733 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.733 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:01.733 12:35:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:02.659 12:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:02.659 12:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:02.659 12:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:02.659 12:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.659 12:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:02.659 12:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:02.659 12:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:02.659 12:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.659 12:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:02.659 12:35:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:04.027 12:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:04.027 12:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:04.027 12:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:04.027 12:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.027 12:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:04.027 12:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:04.027 12:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:04.027 12:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.027 12:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:04.027 12:35:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:04.957 12:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:04.957 12:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:04.957 12:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:04.957 12:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.957 12:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:04.957 12:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:04.957 12:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:04.957 12:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.957 12:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:04.957 12:35:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:05.943 12:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:05.943 12:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:05.943 12:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.943 12:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:05.943 12:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:05.943 12:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:05.943 12:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:05.943 12:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.943 12:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:05.943 12:35:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:06.872 12:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:06.872 12:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:06.872 12:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.872 12:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:06.872 12:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:06.872 12:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:06.872 12:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:07.130 [2024-07-12 12:35:35.958783] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:07.130 [2024-07-12 12:35:35.958874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.130 [2024-07-12 12:35:35.958891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.130 [2024-07-12 12:35:35.958906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.130 [2024-07-12 12:35:35.958917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.130 [2024-07-12 12:35:35.958928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.130 [2024-07-12 12:35:35.958937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.130 [2024-07-12 12:35:35.958948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.130 [2024-07-12 12:35:35.958957] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.131 [2024-07-12 12:35:35.958970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.131 [2024-07-12 12:35:35.958979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.131 [2024-07-12 12:35:35.958989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9e50 is same with the state(5) to be set 00:24:07.131 12:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.131 [2024-07-12 12:35:35.968777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd9e50 (9): Bad file descriptor 00:24:07.131 [2024-07-12 12:35:35.978814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:07.131 12:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:07.131 12:35:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:08.063 [2024-07-12 12:35:36.982840] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:24:08.063 [2024-07-12 12:35:36.982948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd9e50 with addr=10.0.0.2, port=4420 00:24:08.063 [2024-07-12 12:35:36.982976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9e50 is same with the state(5) to be set 00:24:08.063 [2024-07-12 12:35:36.983039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd9e50 (9): Bad file descriptor 00:24:08.063 [2024-07-12 12:35:36.983144] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:08.063 [2024-07-12 12:35:36.983179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:08.063 [2024-07-12 12:35:36.983194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:08.064 [2024-07-12 12:35:36.983211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:08.064 [2024-07-12 12:35:36.983251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
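Between these error bursts, the trace keeps repeating the same @29/@33/@34 lines: they are the test's get_bdev_list and wait_for_bdev helpers polling the host application over its RPC socket. A sketch of that pattern under the names used in the trace (the loop structure is inferred from the @33/@34 lines and the real helper in discovery_remove_ifc.sh may also enforce a timeout; rpc_cmd is assumed to be the autotest wrapper around scripts/rpc.py, whose definition is not shown here):

  get_bdev_list() {
      # names of bdevs currently exposed by the host app, normalized for comparison
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # poll once per second until the bdev list equals the expected value
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }

For example, the wait_for_bdev '' call issued after the target interface was taken down only returns once the namespace bdev nvme0n1 has disappeared from the list.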
00:24:08.064 [2024-07-12 12:35:36.983268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:08.064 12:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:08.064 12:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:08.064 12:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:08.064 12:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.064 12:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:08.064 12:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:08.064 12:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:08.064 12:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.064 12:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:08.064 12:35:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:08.995 [2024-07-12 12:35:37.983352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:08.995 [2024-07-12 12:35:37.983417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:08.995 [2024-07-12 12:35:37.983430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:08.995 [2024-07-12 12:35:37.983442] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:08.995 [2024-07-12 12:35:37.983481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
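The quick give-up seen here follows from the timeouts passed to bdev_nvme_start_discovery at @69 earlier in the trace. The call, reproduced from the trace:

  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

With --reconnect-delay-sec 1 the host retries the connection about once per second, and with --ctrlr-loss-timeout-sec 2 it stops retrying roughly two seconds after the path is lost, which is consistent with the single failed reconnect (errno 110) above followed by the controller being left in the failed state rather than retried indefinitely. That mapping of flags to behavior is an interpretation of the log, not something the test prints.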
00:24:08.996 [2024-07-12 12:35:37.983516] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:08.996 [2024-07-12 12:35:37.983572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.996 [2024-07-12 12:35:37.983590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.996 [2024-07-12 12:35:37.983605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.996 [2024-07-12 12:35:37.983615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.996 [2024-07-12 12:35:37.983627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.996 [2024-07-12 12:35:37.983636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.996 [2024-07-12 12:35:37.983647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.996 [2024-07-12 12:35:37.983656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.996 [2024-07-12 12:35:37.983667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.996 [2024-07-12 12:35:37.983676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.996 [2024-07-12 12:35:37.983686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:24:08.996 [2024-07-12 12:35:37.984230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd9440 (9): Bad file descriptor 00:24:08.996 [2024-07-12 12:35:37.985241] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:08.996 [2024-07-12 12:35:37.985264] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:08.996 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:08.996 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:08.996 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:08.996 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:08.996 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.996 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:08.996 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:09.253 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.253 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:09.253 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:09.253 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:09.253 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:09.253 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:09.253 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:09.253 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:09.253 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:09.253 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:09.253 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.253 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:09.253 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.253 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:09.253 12:35:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:10.184 12:35:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:10.184 12:35:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:10.184 12:35:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:10.184 12:35:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.184 12:35:39 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:10.184 12:35:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.184 12:35:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:10.184 12:35:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.184 12:35:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:10.184 12:35:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:11.115 [2024-07-12 12:35:39.992017] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:11.115 [2024-07-12 12:35:39.992063] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:11.115 [2024-07-12 12:35:39.992082] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:11.115 [2024-07-12 12:35:39.998053] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:11.115 [2024-07-12 12:35:40.054418] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:11.115 [2024-07-12 12:35:40.054472] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:11.115 [2024-07-12 12:35:40.054496] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:11.115 [2024-07-12 12:35:40.054513] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:11.115 [2024-07-12 12:35:40.054523] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:11.115 [2024-07-12 12:35:40.060685] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1ff3c40 was disconnected and freed. delete nvme_qpair. 
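Taken together, the interface removal and restoration that drive this test are the four ip commands traced at @75/@76 and @82/@83, shown here with the outcomes observed in the log as comments (the outcomes are read off the surrounding trace, not printed by these commands):

  # take the target path away: nvme0n1 disappears and the controller is deleted
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  # bring the path back: discovery reconnects and attaches the subsystem again as nvme1 (bdev nvme1n1)
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up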
00:24:11.372 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:11.372 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:11.372 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.372 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:11.372 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.372 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:11.372 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:11.372 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.372 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:11.372 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:11.372 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 92556 00:24:11.372 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 92556 ']' 00:24:11.372 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 92556 00:24:11.372 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:11.372 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.373 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92556 00:24:11.373 killing process with pid 92556 00:24:11.373 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:11.373 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:11.373 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92556' 00:24:11.373 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 92556 00:24:11.373 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 92556 00:24:11.630 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:11.630 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:11.630 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:11.630 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:11.630 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:11.630 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:11.630 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:11.630 rmmod nvme_tcp 00:24:11.630 rmmod nvme_fabrics 00:24:11.630 rmmod nvme_keyring 00:24:11.630 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:11.630 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:11.630 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:24:11.630 12:35:40 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 92519 ']' 00:24:11.630 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 92519 00:24:11.630 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 92519 ']' 00:24:11.630 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 92519 00:24:11.630 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:11.630 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.630 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92519 00:24:11.630 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:11.630 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:11.630 killing process with pid 92519 00:24:11.630 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92519' 00:24:11.630 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 92519 00:24:11.630 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 92519 00:24:11.888 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:11.888 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:11.888 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:11.888 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:11.888 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:11.888 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.888 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:11.888 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.888 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:11.888 ************************************ 00:24:11.888 END TEST nvmf_discovery_remove_ifc 00:24:11.888 ************************************ 00:24:11.888 00:24:11.888 real 0m14.236s 00:24:11.888 user 0m24.669s 00:24:11.888 sys 0m2.562s 00:24:11.888 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:11.888 12:35:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:11.888 12:35:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:11.888 12:35:40 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:11.888 12:35:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:11.888 12:35:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:11.888 12:35:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:12.145 ************************************ 00:24:12.145 START TEST nvmf_identify_kernel_target 00:24:12.145 ************************************ 00:24:12.145 12:35:40 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:12.145 * Looking for test storage... 00:24:12.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.145 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:12.146 Cannot find device "nvmf_tgt_br" 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:12.146 Cannot find device "nvmf_tgt_br2" 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:12.146 Cannot find device "nvmf_tgt_br" 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:12.146 Cannot find device "nvmf_tgt_br2" 00:24:12.146 12:35:41 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:12.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:24:12.146 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:12.403 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:12.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:12.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:24:12.403 00:24:12.403 --- 10.0.0.2 ping statistics --- 00:24:12.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.403 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:12.403 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:12.403 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:24:12.403 00:24:12.403 --- 10.0.0.3 ping statistics --- 00:24:12.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.403 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:12.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:12.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:24:12.403 00:24:12.403 --- 10.0.0.1 ping statistics --- 00:24:12.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.403 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:24:12.403 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:12.404 12:35:41 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:12.967 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:12.967 Waiting for block devices as requested 00:24:12.967 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:12.967 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:12.967 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:12.967 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:12.967 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:12.967 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:12.967 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:12.967 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:12.967 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:12.967 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:12.967 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:13.224 No valid GPT data, bailing 00:24:13.224 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:13.224 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:13.224 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:13.224 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:13.224 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:13.224 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:13.225 No valid GPT data, bailing 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:13.225 No valid GPT data, bailing 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:13.225 No valid GPT data, bailing 00:24:13.225 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:13.484 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:13.484 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:13.484 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:24:13.484 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:24:13.484 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
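The block-device probing above (spdk-gpt.py and blkid reporting "No valid GPT data, bailing") selects an unused namespace, /dev/nvme1n1 in this run, to back the kernel NVMe-oF target, and the entries that follow finish wiring it up through nvmet configfs. A consolidated, hedged sketch of that configure_kernel_target sequence is shown below; subsys and port are shorthand variables introduced here, and the redirect targets are assumed from the standard nvmet configfs layout, since the trace only prints the echoed values:

    # hedged sketch of the kernel NVMe/TCP target setup traced in the surrounding entries
    modprobe nvmet nvmet-tcp                                 # kernel target core + TCP transport
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo 1            > "$subsys/attr_allow_any_host"        # accept any host NQN
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"   # backing block device chosen above
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"                  # listen address used in this run
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                      # expose the subsystem on the port

Once the port symlink is in place, the host-side nvme discover call traced below (-a 10.0.0.1 -t tcp -s 4420) should report two discovery log records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.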
00:24:13.484 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:13.485 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:13.485 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:13.485 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:24:13.485 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:24:13.485 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:24:13.485 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:13.485 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:24:13.485 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:24:13.485 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:24:13.485 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:13.485 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -a 10.0.0.1 -t tcp -s 4420 00:24:13.485 00:24:13.485 Discovery Log Number of Records 2, Generation counter 2 00:24:13.485 =====Discovery Log Entry 0====== 00:24:13.485 trtype: tcp 00:24:13.485 adrfam: ipv4 00:24:13.485 subtype: current discovery subsystem 00:24:13.485 treq: not specified, sq flow control disable supported 00:24:13.485 portid: 1 00:24:13.485 trsvcid: 4420 00:24:13.485 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:13.485 traddr: 10.0.0.1 00:24:13.485 eflags: none 00:24:13.485 sectype: none 00:24:13.485 =====Discovery Log Entry 1====== 00:24:13.485 trtype: tcp 00:24:13.485 adrfam: ipv4 00:24:13.485 subtype: nvme subsystem 00:24:13.485 treq: not specified, sq flow control disable supported 00:24:13.485 portid: 1 00:24:13.485 trsvcid: 4420 00:24:13.485 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:13.485 traddr: 10.0.0.1 00:24:13.485 eflags: none 00:24:13.485 sectype: none 00:24:13.485 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:13.485 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:13.485 ===================================================== 00:24:13.485 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:13.485 ===================================================== 00:24:13.485 Controller Capabilities/Features 00:24:13.485 ================================ 00:24:13.485 Vendor ID: 0000 00:24:13.485 Subsystem Vendor ID: 0000 00:24:13.485 Serial Number: d40d4c179712301f1d79 00:24:13.485 Model Number: Linux 00:24:13.485 Firmware Version: 6.7.0-68 00:24:13.485 Recommended Arb Burst: 0 00:24:13.485 IEEE OUI Identifier: 00 00 00 00:24:13.485 Multi-path I/O 00:24:13.485 May have multiple subsystem ports: No 00:24:13.485 May have multiple controllers: No 00:24:13.485 Associated with SR-IOV VF: No 00:24:13.485 Max Data Transfer Size: Unlimited 00:24:13.485 Max Number of Namespaces: 0 
00:24:13.485 Max Number of I/O Queues: 1024 00:24:13.485 NVMe Specification Version (VS): 1.3 00:24:13.485 NVMe Specification Version (Identify): 1.3 00:24:13.485 Maximum Queue Entries: 1024 00:24:13.485 Contiguous Queues Required: No 00:24:13.485 Arbitration Mechanisms Supported 00:24:13.485 Weighted Round Robin: Not Supported 00:24:13.485 Vendor Specific: Not Supported 00:24:13.485 Reset Timeout: 7500 ms 00:24:13.485 Doorbell Stride: 4 bytes 00:24:13.485 NVM Subsystem Reset: Not Supported 00:24:13.485 Command Sets Supported 00:24:13.485 NVM Command Set: Supported 00:24:13.485 Boot Partition: Not Supported 00:24:13.485 Memory Page Size Minimum: 4096 bytes 00:24:13.485 Memory Page Size Maximum: 4096 bytes 00:24:13.485 Persistent Memory Region: Not Supported 00:24:13.485 Optional Asynchronous Events Supported 00:24:13.485 Namespace Attribute Notices: Not Supported 00:24:13.485 Firmware Activation Notices: Not Supported 00:24:13.485 ANA Change Notices: Not Supported 00:24:13.485 PLE Aggregate Log Change Notices: Not Supported 00:24:13.485 LBA Status Info Alert Notices: Not Supported 00:24:13.485 EGE Aggregate Log Change Notices: Not Supported 00:24:13.485 Normal NVM Subsystem Shutdown event: Not Supported 00:24:13.485 Zone Descriptor Change Notices: Not Supported 00:24:13.485 Discovery Log Change Notices: Supported 00:24:13.485 Controller Attributes 00:24:13.485 128-bit Host Identifier: Not Supported 00:24:13.485 Non-Operational Permissive Mode: Not Supported 00:24:13.485 NVM Sets: Not Supported 00:24:13.485 Read Recovery Levels: Not Supported 00:24:13.485 Endurance Groups: Not Supported 00:24:13.485 Predictable Latency Mode: Not Supported 00:24:13.485 Traffic Based Keep ALive: Not Supported 00:24:13.485 Namespace Granularity: Not Supported 00:24:13.485 SQ Associations: Not Supported 00:24:13.485 UUID List: Not Supported 00:24:13.485 Multi-Domain Subsystem: Not Supported 00:24:13.485 Fixed Capacity Management: Not Supported 00:24:13.485 Variable Capacity Management: Not Supported 00:24:13.485 Delete Endurance Group: Not Supported 00:24:13.485 Delete NVM Set: Not Supported 00:24:13.485 Extended LBA Formats Supported: Not Supported 00:24:13.485 Flexible Data Placement Supported: Not Supported 00:24:13.485 00:24:13.485 Controller Memory Buffer Support 00:24:13.485 ================================ 00:24:13.485 Supported: No 00:24:13.485 00:24:13.485 Persistent Memory Region Support 00:24:13.485 ================================ 00:24:13.485 Supported: No 00:24:13.485 00:24:13.485 Admin Command Set Attributes 00:24:13.485 ============================ 00:24:13.485 Security Send/Receive: Not Supported 00:24:13.485 Format NVM: Not Supported 00:24:13.485 Firmware Activate/Download: Not Supported 00:24:13.485 Namespace Management: Not Supported 00:24:13.485 Device Self-Test: Not Supported 00:24:13.485 Directives: Not Supported 00:24:13.485 NVMe-MI: Not Supported 00:24:13.485 Virtualization Management: Not Supported 00:24:13.485 Doorbell Buffer Config: Not Supported 00:24:13.485 Get LBA Status Capability: Not Supported 00:24:13.485 Command & Feature Lockdown Capability: Not Supported 00:24:13.485 Abort Command Limit: 1 00:24:13.485 Async Event Request Limit: 1 00:24:13.485 Number of Firmware Slots: N/A 00:24:13.485 Firmware Slot 1 Read-Only: N/A 00:24:13.485 Firmware Activation Without Reset: N/A 00:24:13.485 Multiple Update Detection Support: N/A 00:24:13.485 Firmware Update Granularity: No Information Provided 00:24:13.485 Per-Namespace SMART Log: No 00:24:13.485 Asymmetric Namespace Access Log Page: 
Not Supported 00:24:13.485 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:13.485 Command Effects Log Page: Not Supported 00:24:13.485 Get Log Page Extended Data: Supported 00:24:13.485 Telemetry Log Pages: Not Supported 00:24:13.485 Persistent Event Log Pages: Not Supported 00:24:13.485 Supported Log Pages Log Page: May Support 00:24:13.485 Commands Supported & Effects Log Page: Not Supported 00:24:13.485 Feature Identifiers & Effects Log Page:May Support 00:24:13.485 NVMe-MI Commands & Effects Log Page: May Support 00:24:13.485 Data Area 4 for Telemetry Log: Not Supported 00:24:13.485 Error Log Page Entries Supported: 1 00:24:13.485 Keep Alive: Not Supported 00:24:13.485 00:24:13.485 NVM Command Set Attributes 00:24:13.485 ========================== 00:24:13.485 Submission Queue Entry Size 00:24:13.485 Max: 1 00:24:13.485 Min: 1 00:24:13.485 Completion Queue Entry Size 00:24:13.485 Max: 1 00:24:13.485 Min: 1 00:24:13.485 Number of Namespaces: 0 00:24:13.485 Compare Command: Not Supported 00:24:13.485 Write Uncorrectable Command: Not Supported 00:24:13.485 Dataset Management Command: Not Supported 00:24:13.485 Write Zeroes Command: Not Supported 00:24:13.485 Set Features Save Field: Not Supported 00:24:13.485 Reservations: Not Supported 00:24:13.485 Timestamp: Not Supported 00:24:13.485 Copy: Not Supported 00:24:13.485 Volatile Write Cache: Not Present 00:24:13.485 Atomic Write Unit (Normal): 1 00:24:13.485 Atomic Write Unit (PFail): 1 00:24:13.485 Atomic Compare & Write Unit: 1 00:24:13.485 Fused Compare & Write: Not Supported 00:24:13.485 Scatter-Gather List 00:24:13.485 SGL Command Set: Supported 00:24:13.485 SGL Keyed: Not Supported 00:24:13.486 SGL Bit Bucket Descriptor: Not Supported 00:24:13.486 SGL Metadata Pointer: Not Supported 00:24:13.486 Oversized SGL: Not Supported 00:24:13.486 SGL Metadata Address: Not Supported 00:24:13.486 SGL Offset: Supported 00:24:13.486 Transport SGL Data Block: Not Supported 00:24:13.486 Replay Protected Memory Block: Not Supported 00:24:13.486 00:24:13.486 Firmware Slot Information 00:24:13.486 ========================= 00:24:13.486 Active slot: 0 00:24:13.486 00:24:13.486 00:24:13.486 Error Log 00:24:13.486 ========= 00:24:13.486 00:24:13.486 Active Namespaces 00:24:13.486 ================= 00:24:13.486 Discovery Log Page 00:24:13.486 ================== 00:24:13.486 Generation Counter: 2 00:24:13.486 Number of Records: 2 00:24:13.486 Record Format: 0 00:24:13.486 00:24:13.486 Discovery Log Entry 0 00:24:13.486 ---------------------- 00:24:13.486 Transport Type: 3 (TCP) 00:24:13.486 Address Family: 1 (IPv4) 00:24:13.486 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:13.486 Entry Flags: 00:24:13.486 Duplicate Returned Information: 0 00:24:13.486 Explicit Persistent Connection Support for Discovery: 0 00:24:13.486 Transport Requirements: 00:24:13.486 Secure Channel: Not Specified 00:24:13.486 Port ID: 1 (0x0001) 00:24:13.486 Controller ID: 65535 (0xffff) 00:24:13.486 Admin Max SQ Size: 32 00:24:13.486 Transport Service Identifier: 4420 00:24:13.486 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:13.486 Transport Address: 10.0.0.1 00:24:13.486 Discovery Log Entry 1 00:24:13.486 ---------------------- 00:24:13.486 Transport Type: 3 (TCP) 00:24:13.486 Address Family: 1 (IPv4) 00:24:13.486 Subsystem Type: 2 (NVM Subsystem) 00:24:13.486 Entry Flags: 00:24:13.486 Duplicate Returned Information: 0 00:24:13.486 Explicit Persistent Connection Support for Discovery: 0 00:24:13.486 Transport Requirements: 00:24:13.486 
Secure Channel: Not Specified 00:24:13.486 Port ID: 1 (0x0001) 00:24:13.486 Controller ID: 65535 (0xffff) 00:24:13.486 Admin Max SQ Size: 32 00:24:13.486 Transport Service Identifier: 4420 00:24:13.486 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:13.486 Transport Address: 10.0.0.1 00:24:13.486 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:13.781 get_feature(0x01) failed 00:24:13.781 get_feature(0x02) failed 00:24:13.781 get_feature(0x04) failed 00:24:13.781 ===================================================== 00:24:13.781 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:13.781 ===================================================== 00:24:13.781 Controller Capabilities/Features 00:24:13.781 ================================ 00:24:13.781 Vendor ID: 0000 00:24:13.781 Subsystem Vendor ID: 0000 00:24:13.781 Serial Number: c17c3af5bb12243bfd9d 00:24:13.781 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:13.781 Firmware Version: 6.7.0-68 00:24:13.781 Recommended Arb Burst: 6 00:24:13.781 IEEE OUI Identifier: 00 00 00 00:24:13.781 Multi-path I/O 00:24:13.781 May have multiple subsystem ports: Yes 00:24:13.781 May have multiple controllers: Yes 00:24:13.781 Associated with SR-IOV VF: No 00:24:13.781 Max Data Transfer Size: Unlimited 00:24:13.781 Max Number of Namespaces: 1024 00:24:13.781 Max Number of I/O Queues: 128 00:24:13.781 NVMe Specification Version (VS): 1.3 00:24:13.781 NVMe Specification Version (Identify): 1.3 00:24:13.781 Maximum Queue Entries: 1024 00:24:13.781 Contiguous Queues Required: No 00:24:13.781 Arbitration Mechanisms Supported 00:24:13.781 Weighted Round Robin: Not Supported 00:24:13.781 Vendor Specific: Not Supported 00:24:13.781 Reset Timeout: 7500 ms 00:24:13.781 Doorbell Stride: 4 bytes 00:24:13.781 NVM Subsystem Reset: Not Supported 00:24:13.781 Command Sets Supported 00:24:13.781 NVM Command Set: Supported 00:24:13.781 Boot Partition: Not Supported 00:24:13.781 Memory Page Size Minimum: 4096 bytes 00:24:13.781 Memory Page Size Maximum: 4096 bytes 00:24:13.781 Persistent Memory Region: Not Supported 00:24:13.781 Optional Asynchronous Events Supported 00:24:13.781 Namespace Attribute Notices: Supported 00:24:13.781 Firmware Activation Notices: Not Supported 00:24:13.781 ANA Change Notices: Supported 00:24:13.781 PLE Aggregate Log Change Notices: Not Supported 00:24:13.781 LBA Status Info Alert Notices: Not Supported 00:24:13.781 EGE Aggregate Log Change Notices: Not Supported 00:24:13.781 Normal NVM Subsystem Shutdown event: Not Supported 00:24:13.781 Zone Descriptor Change Notices: Not Supported 00:24:13.781 Discovery Log Change Notices: Not Supported 00:24:13.781 Controller Attributes 00:24:13.781 128-bit Host Identifier: Supported 00:24:13.781 Non-Operational Permissive Mode: Not Supported 00:24:13.781 NVM Sets: Not Supported 00:24:13.781 Read Recovery Levels: Not Supported 00:24:13.781 Endurance Groups: Not Supported 00:24:13.781 Predictable Latency Mode: Not Supported 00:24:13.781 Traffic Based Keep ALive: Supported 00:24:13.781 Namespace Granularity: Not Supported 00:24:13.781 SQ Associations: Not Supported 00:24:13.781 UUID List: Not Supported 00:24:13.781 Multi-Domain Subsystem: Not Supported 00:24:13.781 Fixed Capacity Management: Not Supported 00:24:13.781 Variable Capacity Management: Not Supported 00:24:13.781 
Delete Endurance Group: Not Supported 00:24:13.781 Delete NVM Set: Not Supported 00:24:13.781 Extended LBA Formats Supported: Not Supported 00:24:13.781 Flexible Data Placement Supported: Not Supported 00:24:13.781 00:24:13.781 Controller Memory Buffer Support 00:24:13.781 ================================ 00:24:13.781 Supported: No 00:24:13.781 00:24:13.781 Persistent Memory Region Support 00:24:13.781 ================================ 00:24:13.781 Supported: No 00:24:13.781 00:24:13.781 Admin Command Set Attributes 00:24:13.781 ============================ 00:24:13.781 Security Send/Receive: Not Supported 00:24:13.781 Format NVM: Not Supported 00:24:13.781 Firmware Activate/Download: Not Supported 00:24:13.781 Namespace Management: Not Supported 00:24:13.781 Device Self-Test: Not Supported 00:24:13.781 Directives: Not Supported 00:24:13.781 NVMe-MI: Not Supported 00:24:13.781 Virtualization Management: Not Supported 00:24:13.781 Doorbell Buffer Config: Not Supported 00:24:13.781 Get LBA Status Capability: Not Supported 00:24:13.781 Command & Feature Lockdown Capability: Not Supported 00:24:13.781 Abort Command Limit: 4 00:24:13.781 Async Event Request Limit: 4 00:24:13.781 Number of Firmware Slots: N/A 00:24:13.781 Firmware Slot 1 Read-Only: N/A 00:24:13.781 Firmware Activation Without Reset: N/A 00:24:13.781 Multiple Update Detection Support: N/A 00:24:13.781 Firmware Update Granularity: No Information Provided 00:24:13.781 Per-Namespace SMART Log: Yes 00:24:13.781 Asymmetric Namespace Access Log Page: Supported 00:24:13.781 ANA Transition Time : 10 sec 00:24:13.781 00:24:13.781 Asymmetric Namespace Access Capabilities 00:24:13.781 ANA Optimized State : Supported 00:24:13.781 ANA Non-Optimized State : Supported 00:24:13.781 ANA Inaccessible State : Supported 00:24:13.781 ANA Persistent Loss State : Supported 00:24:13.781 ANA Change State : Supported 00:24:13.781 ANAGRPID is not changed : No 00:24:13.781 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:13.781 00:24:13.781 ANA Group Identifier Maximum : 128 00:24:13.781 Number of ANA Group Identifiers : 128 00:24:13.781 Max Number of Allowed Namespaces : 1024 00:24:13.781 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:13.781 Command Effects Log Page: Supported 00:24:13.781 Get Log Page Extended Data: Supported 00:24:13.781 Telemetry Log Pages: Not Supported 00:24:13.781 Persistent Event Log Pages: Not Supported 00:24:13.781 Supported Log Pages Log Page: May Support 00:24:13.781 Commands Supported & Effects Log Page: Not Supported 00:24:13.781 Feature Identifiers & Effects Log Page:May Support 00:24:13.781 NVMe-MI Commands & Effects Log Page: May Support 00:24:13.781 Data Area 4 for Telemetry Log: Not Supported 00:24:13.781 Error Log Page Entries Supported: 128 00:24:13.781 Keep Alive: Supported 00:24:13.781 Keep Alive Granularity: 1000 ms 00:24:13.781 00:24:13.781 NVM Command Set Attributes 00:24:13.781 ========================== 00:24:13.781 Submission Queue Entry Size 00:24:13.781 Max: 64 00:24:13.781 Min: 64 00:24:13.781 Completion Queue Entry Size 00:24:13.781 Max: 16 00:24:13.781 Min: 16 00:24:13.781 Number of Namespaces: 1024 00:24:13.781 Compare Command: Not Supported 00:24:13.781 Write Uncorrectable Command: Not Supported 00:24:13.781 Dataset Management Command: Supported 00:24:13.781 Write Zeroes Command: Supported 00:24:13.781 Set Features Save Field: Not Supported 00:24:13.781 Reservations: Not Supported 00:24:13.782 Timestamp: Not Supported 00:24:13.782 Copy: Not Supported 00:24:13.782 Volatile Write Cache: Present 
00:24:13.782 Atomic Write Unit (Normal): 1 00:24:13.782 Atomic Write Unit (PFail): 1 00:24:13.782 Atomic Compare & Write Unit: 1 00:24:13.782 Fused Compare & Write: Not Supported 00:24:13.782 Scatter-Gather List 00:24:13.782 SGL Command Set: Supported 00:24:13.782 SGL Keyed: Not Supported 00:24:13.782 SGL Bit Bucket Descriptor: Not Supported 00:24:13.782 SGL Metadata Pointer: Not Supported 00:24:13.782 Oversized SGL: Not Supported 00:24:13.782 SGL Metadata Address: Not Supported 00:24:13.782 SGL Offset: Supported 00:24:13.782 Transport SGL Data Block: Not Supported 00:24:13.782 Replay Protected Memory Block: Not Supported 00:24:13.782 00:24:13.782 Firmware Slot Information 00:24:13.782 ========================= 00:24:13.782 Active slot: 0 00:24:13.782 00:24:13.782 Asymmetric Namespace Access 00:24:13.782 =========================== 00:24:13.782 Change Count : 0 00:24:13.782 Number of ANA Group Descriptors : 1 00:24:13.782 ANA Group Descriptor : 0 00:24:13.782 ANA Group ID : 1 00:24:13.782 Number of NSID Values : 1 00:24:13.782 Change Count : 0 00:24:13.782 ANA State : 1 00:24:13.782 Namespace Identifier : 1 00:24:13.782 00:24:13.782 Commands Supported and Effects 00:24:13.782 ============================== 00:24:13.782 Admin Commands 00:24:13.782 -------------- 00:24:13.782 Get Log Page (02h): Supported 00:24:13.782 Identify (06h): Supported 00:24:13.782 Abort (08h): Supported 00:24:13.782 Set Features (09h): Supported 00:24:13.782 Get Features (0Ah): Supported 00:24:13.782 Asynchronous Event Request (0Ch): Supported 00:24:13.782 Keep Alive (18h): Supported 00:24:13.782 I/O Commands 00:24:13.782 ------------ 00:24:13.782 Flush (00h): Supported 00:24:13.782 Write (01h): Supported LBA-Change 00:24:13.782 Read (02h): Supported 00:24:13.782 Write Zeroes (08h): Supported LBA-Change 00:24:13.782 Dataset Management (09h): Supported 00:24:13.782 00:24:13.782 Error Log 00:24:13.782 ========= 00:24:13.782 Entry: 0 00:24:13.782 Error Count: 0x3 00:24:13.782 Submission Queue Id: 0x0 00:24:13.782 Command Id: 0x5 00:24:13.782 Phase Bit: 0 00:24:13.782 Status Code: 0x2 00:24:13.782 Status Code Type: 0x0 00:24:13.782 Do Not Retry: 1 00:24:13.782 Error Location: 0x28 00:24:13.782 LBA: 0x0 00:24:13.782 Namespace: 0x0 00:24:13.782 Vendor Log Page: 0x0 00:24:13.782 ----------- 00:24:13.782 Entry: 1 00:24:13.782 Error Count: 0x2 00:24:13.782 Submission Queue Id: 0x0 00:24:13.782 Command Id: 0x5 00:24:13.782 Phase Bit: 0 00:24:13.782 Status Code: 0x2 00:24:13.782 Status Code Type: 0x0 00:24:13.782 Do Not Retry: 1 00:24:13.782 Error Location: 0x28 00:24:13.782 LBA: 0x0 00:24:13.782 Namespace: 0x0 00:24:13.782 Vendor Log Page: 0x0 00:24:13.782 ----------- 00:24:13.782 Entry: 2 00:24:13.782 Error Count: 0x1 00:24:13.782 Submission Queue Id: 0x0 00:24:13.782 Command Id: 0x4 00:24:13.782 Phase Bit: 0 00:24:13.782 Status Code: 0x2 00:24:13.782 Status Code Type: 0x0 00:24:13.782 Do Not Retry: 1 00:24:13.782 Error Location: 0x28 00:24:13.782 LBA: 0x0 00:24:13.782 Namespace: 0x0 00:24:13.782 Vendor Log Page: 0x0 00:24:13.782 00:24:13.782 Number of Queues 00:24:13.782 ================ 00:24:13.782 Number of I/O Submission Queues: 128 00:24:13.782 Number of I/O Completion Queues: 128 00:24:13.782 00:24:13.782 ZNS Specific Controller Data 00:24:13.782 ============================ 00:24:13.782 Zone Append Size Limit: 0 00:24:13.782 00:24:13.782 00:24:13.782 Active Namespaces 00:24:13.782 ================= 00:24:13.782 get_feature(0x05) failed 00:24:13.782 Namespace ID:1 00:24:13.782 Command Set Identifier: NVM (00h) 
00:24:13.782 Deallocate: Supported 00:24:13.782 Deallocated/Unwritten Error: Not Supported 00:24:13.782 Deallocated Read Value: Unknown 00:24:13.782 Deallocate in Write Zeroes: Not Supported 00:24:13.782 Deallocated Guard Field: 0xFFFF 00:24:13.782 Flush: Supported 00:24:13.782 Reservation: Not Supported 00:24:13.782 Namespace Sharing Capabilities: Multiple Controllers 00:24:13.782 Size (in LBAs): 1310720 (5GiB) 00:24:13.782 Capacity (in LBAs): 1310720 (5GiB) 00:24:13.782 Utilization (in LBAs): 1310720 (5GiB) 00:24:13.782 UUID: 44d139c0-d95d-45ba-8607-fdf6e604845f 00:24:13.782 Thin Provisioning: Not Supported 00:24:13.782 Per-NS Atomic Units: Yes 00:24:13.782 Atomic Boundary Size (Normal): 0 00:24:13.782 Atomic Boundary Size (PFail): 0 00:24:13.782 Atomic Boundary Offset: 0 00:24:13.782 NGUID/EUI64 Never Reused: No 00:24:13.782 ANA group ID: 1 00:24:13.782 Namespace Write Protected: No 00:24:13.782 Number of LBA Formats: 1 00:24:13.782 Current LBA Format: LBA Format #00 00:24:13.782 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:24:13.782 00:24:13.782 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:13.782 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:13.782 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:24:13.782 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:13.782 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:24:13.782 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:13.782 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:13.782 rmmod nvme_tcp 00:24:13.782 rmmod nvme_fabrics 00:24:13.782 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:13.782 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:24:13.782 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:24:13.782 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:13.782 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:13.782 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:13.782 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:13.782 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:13.782 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:13.782 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.782 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:13.782 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.040 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:14.040 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:14.040 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:14.040 
12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:24:14.040 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:14.040 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:14.040 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:14.040 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:14.040 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:14.040 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:14.040 12:35:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:14.607 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:14.866 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:14.866 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:14.866 00:24:14.866 real 0m2.842s 00:24:14.866 user 0m0.990s 00:24:14.866 sys 0m1.364s 00:24:14.866 12:35:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:14.866 12:35:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.867 ************************************ 00:24:14.867 END TEST nvmf_identify_kernel_target 00:24:14.867 ************************************ 00:24:14.867 12:35:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:14.867 12:35:43 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:14.867 12:35:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:14.867 12:35:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:14.867 12:35:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:14.867 ************************************ 00:24:14.867 START TEST nvmf_auth_host 00:24:14.867 ************************************ 00:24:14.867 12:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:14.867 * Looking for test storage... 
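As a companion to the setup sketch above, the clean_kernel_target trap that just ran at the end of nvmf_identify_kernel_target tears the target down in reverse order. A hedged sketch follows, reusing the same subsys/port shorthand; the destination of the lone "echo 0" is assumed to be the namespace enable attribute, which the trace does not print:

    echo 0 > "$subsys/namespaces/1/enable"            # assumed target of the traced "echo 0"
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1"
    rmdir "$port"
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet                       # unload the kernel target modules
    /home/vagrant/spdk_repo/spdk/scripts/setup.sh     # rebind NVMe devices to uio_pci_generic, as logged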
00:24:15.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:15.125 12:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:15.125 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:15.125 Cannot find device "nvmf_tgt_br" 00:24:15.125 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:24:15.125 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:15.125 Cannot find device "nvmf_tgt_br2" 00:24:15.125 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:24:15.125 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:15.125 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:15.125 Cannot find device "nvmf_tgt_br" 
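nvmf_veth_init first tries to delete any leftover interfaces and namespace from a previous run (hence the "Cannot find device" messages here and the similar errors that follow), then rebuilds the test topology that the next lines create: one veth pair for the initiator in the default namespace, two veth pairs for the target inside nvmf_tgt_ns_spdk, all attached to the nvmf_br bridge. A condensed sketch using the addresses from the variables above (link-up steps and the iptables ACCEPT rules omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br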
00:24:15.125 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:24:15.125 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:15.125 Cannot find device "nvmf_tgt_br2" 00:24:15.125 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:24:15.125 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:15.125 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:15.125 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:15.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:15.125 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:24:15.125 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:15.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:15.125 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:24:15.125 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:15.125 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:15.125 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:15.125 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:15.126 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:15.126 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:15.126 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:15.126 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:15.126 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:15.126 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:15.126 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:15.126 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:15.126 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:15.126 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:15.126 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:15.126 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:15.383 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:15.383 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:15.383 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:15.383 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:15.383 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:24:15.383 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:15.383 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:15.383 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:15.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:15.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:24:15.383 00:24:15.383 --- 10.0.0.2 ping statistics --- 00:24:15.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.383 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:24:15.383 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:15.383 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:15.383 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:24:15.383 00:24:15.383 --- 10.0.0.3 ping statistics --- 00:24:15.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.383 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:24:15.383 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:15.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:15.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:24:15.383 00:24:15.383 --- 10.0.0.1 ping statistics --- 00:24:15.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.383 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:24:15.383 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:15.384 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:24:15.384 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:15.384 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:15.384 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:15.384 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:15.384 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:15.384 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:15.384 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:15.384 12:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:15.384 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:15.384 12:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:15.384 12:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.384 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=93435 00:24:15.384 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:15.384 12:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 93435 00:24:15.384 12:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 93435 ']' 00:24:15.384 12:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.384 12:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:15.384 12:35:44 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.384 12:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:15.384 12:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.315 12:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:16.315 12:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:24:16.315 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:16.315 12:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:16.315 12:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=18ebf749d9b481c9d9240f6243cc4635 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.FHT 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 18ebf749d9b481c9d9240f6243cc4635 0 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 18ebf749d9b481c9d9240f6243cc4635 0 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=18ebf749d9b481c9d9240f6243cc4635 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.FHT 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.FHT 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.FHT 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e14e473f12f9b91e2906aa82bd1fc45bf49fb1976ce2bf0f6ef87218439d4569 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.b6A 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e14e473f12f9b91e2906aa82bd1fc45bf49fb1976ce2bf0f6ef87218439d4569 3 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e14e473f12f9b91e2906aa82bd1fc45bf49fb1976ce2bf0f6ef87218439d4569 3 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e14e473f12f9b91e2906aa82bd1fc45bf49fb1976ce2bf0f6ef87218439d4569 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.b6A 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.b6A 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.b6A 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c4c6775e98bf269a6e365553174038d360edd20ffb5ca82f 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.aJa 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c4c6775e98bf269a6e365553174038d360edd20ffb5ca82f 0 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c4c6775e98bf269a6e365553174038d360edd20ffb5ca82f 0 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c4c6775e98bf269a6e365553174038d360edd20ffb5ca82f 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.aJa 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.aJa 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.aJa 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0e231eaaaab0770be306b001f886d2a0cc794503202934d6 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.BSG 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0e231eaaaab0770be306b001f886d2a0cc794503202934d6 2 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0e231eaaaab0770be306b001f886d2a0cc794503202934d6 2 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0e231eaaaab0770be306b001f886d2a0cc794503202934d6 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.BSG 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.BSG 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.BSG 00:24:16.573 12:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:16.574 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:16.574 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:16.574 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:16.574 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:16.574 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:16.574 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:16.574 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c701154c8a1491904c3d6de78282ae81 00:24:16.574 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:16.574 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.HBD 00:24:16.574 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c701154c8a1491904c3d6de78282ae81 
1 00:24:16.574 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c701154c8a1491904c3d6de78282ae81 1 00:24:16.574 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:16.574 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:16.574 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c701154c8a1491904c3d6de78282ae81 00:24:16.574 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:16.574 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:16.831 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.HBD 00:24:16.831 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.HBD 00:24:16.831 12:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.HBD 00:24:16.831 12:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:16.831 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:16.831 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:16.831 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:16.831 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:16.831 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:16.831 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:16.831 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=24c76a14aa6395330b4e914adfa77460 00:24:16.831 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:16.831 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8rI 00:24:16.831 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 24c76a14aa6395330b4e914adfa77460 1 00:24:16.831 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 24c76a14aa6395330b4e914adfa77460 1 00:24:16.831 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:16.831 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=24c76a14aa6395330b4e914adfa77460 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8rI 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8rI 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.8rI 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:16.832 12:35:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d17a373fa8f94ed2231f2ff2a905538f926bc20ba0fcc7ae 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.N5a 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d17a373fa8f94ed2231f2ff2a905538f926bc20ba0fcc7ae 2 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d17a373fa8f94ed2231f2ff2a905538f926bc20ba0fcc7ae 2 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d17a373fa8f94ed2231f2ff2a905538f926bc20ba0fcc7ae 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.N5a 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.N5a 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.N5a 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c9ce05255ddbc44438e891ac1340f60a 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5hb 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c9ce05255ddbc44438e891ac1340f60a 0 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c9ce05255ddbc44438e891ac1340f60a 0 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c9ce05255ddbc44438e891ac1340f60a 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5hb 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5hb 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.5hb 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5f861daeda3ec61a59420d68d2cda443eb6af6cfb44caae2cc75ed4c83190ec0 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qba 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5f861daeda3ec61a59420d68d2cda443eb6af6cfb44caae2cc75ed4c83190ec0 3 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5f861daeda3ec61a59420d68d2cda443eb6af6cfb44caae2cc75ed4c83190ec0 3 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5f861daeda3ec61a59420d68d2cda443eb6af6cfb44caae2cc75ed4c83190ec0 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:16.832 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:17.090 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qba 00:24:17.090 12:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qba 00:24:17.090 12:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.qba 00:24:17.090 12:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:17.090 12:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 93435 00:24:17.090 12:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 93435 ']' 00:24:17.090 12:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.090 12:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:17.090 12:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
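Each gen_dhchap_key call above draws random bytes with xxd, formats them as a DHHC-1 secret, and stores the result in a mode-0600 temp file that is later fed to the keyring. A rough standalone sketch of the formatting step; it assumes (as nvme-cli's gen-dhchap-key does) that the base64 payload is the ASCII key followed by its CRC32 in little-endian, and that digest id 00 means the secret is used without a hash transformation:

    key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters of key material
    secret=$(python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:00:"+base64.b64encode(k+struct.pack("<I",zlib.crc32(k)&0xffffffff)).decode()+":")' "$key")
    file=$(mktemp -t spdk.key-null.XXX)
    echo "$secret" > "$file" && chmod 0600 "$file"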
00:24:17.090 12:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:17.090 12:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FHT 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.b6A ]] 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b6A 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.aJa 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.BSG ]] 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BSG 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.HBD 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.8rI ]] 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8rI 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
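host/auth.sh then registers each generated secret file with the running nvmf_tgt through the keyring_file_add_key RPC; rpc_cmd is the test harness wrapper around SPDK's RPC client. Roughly equivalent direct invocations (the rpc.py path and the default RPC socket are assumptions; key names and files match the trace above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.FHT
    $rpc -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b6A
    $rpc -s /var/tmp/spdk.sock keyring_file_add_key key1  /tmp/spdk.key-null.aJa
    $rpc -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BSG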
00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.N5a 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.5hb ]] 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.5hb 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.qba 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:17.348 12:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:17.349 12:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:17.349 12:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:17.349 12:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:17.349 12:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:17.349 12:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
00:24:17.349 12:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:17.349 12:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:17.349 12:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:17.349 12:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:17.655 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:17.655 Waiting for block devices as requested 00:24:17.655 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:17.912 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:18.476 No valid GPT data, bailing 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:18.476 No valid GPT data, bailing 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:18.476 No valid GPT data, bailing 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:24:18.476 12:35:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:18.733 No valid GPT data, bailing 00:24:18.733 12:35:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:18.733 12:35:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:18.733 12:35:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:18.733 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:24:18.733 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:24:18.733 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:18.733 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:18.733 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:18.733 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:18.733 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:24:18.733 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:24:18.733 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:24:18.733 12:35:47 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -a 10.0.0.1 -t tcp -s 4420 00:24:18.734 00:24:18.734 Discovery Log Number of Records 2, Generation counter 2 00:24:18.734 =====Discovery Log Entry 0====== 00:24:18.734 trtype: tcp 00:24:18.734 adrfam: ipv4 00:24:18.734 subtype: current discovery subsystem 00:24:18.734 treq: not specified, sq flow control disable supported 00:24:18.734 portid: 1 00:24:18.734 trsvcid: 4420 00:24:18.734 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:18.734 traddr: 10.0.0.1 00:24:18.734 eflags: none 00:24:18.734 sectype: none 00:24:18.734 =====Discovery Log Entry 1====== 00:24:18.734 trtype: tcp 00:24:18.734 adrfam: ipv4 00:24:18.734 subtype: nvme subsystem 00:24:18.734 treq: not specified, sq flow control disable supported 00:24:18.734 portid: 1 00:24:18.734 trsvcid: 4420 00:24:18.734 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:18.734 traddr: 10.0.0.1 00:24:18.734 eflags: none 00:24:18.734 sectype: none 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: ]] 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.734 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.991 nvme0n1 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: ]] 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:18.991 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:18.992 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.992 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.992 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:18.992 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.992 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:18.992 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:18.992 12:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:18.992 12:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.992 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.992 12:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.992 nvme0n1 00:24:18.992 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.992 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.992 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.992 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.992 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.992 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: ]] 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.250 nvme0n1 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.250 12:35:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: ]] 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.250 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.508 nvme0n1 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: ]] 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:19.508 12:35:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.508 nvme0n1 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.508 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.766 nvme0n1 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.766 12:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: ]] 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.024 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.282 nvme0n1 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: ]] 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.282 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.283 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.283 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:20.283 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.283 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.283 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.283 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.283 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:20.283 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.283 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:20.283 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:20.283 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:20.283 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:20.283 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.283 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.540 nvme0n1 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.541 12:35:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: ]] 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.541 nvme0n1 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.541 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: ]] 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.799 nvme0n1 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
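The trace above repeats a target-side setup step (nvmet_auth_set_key) for each digest/DH-group/key-ID combination: the function receives a digest, a DH group, and a key index, then echoes the hash name ('hmac(sha256)'), the group name, and the DHHC-1 host and controller keys. The xtrace does not show where those echoes are redirected, so the sketch below is only an illustrative reconstruction; the configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and the exact host directory layout are assumptions, not taken from this log.

```bash
#!/usr/bin/env bash
# Hypothetical sketch of an nvmet_auth_set_key-style step for the kernel nvmet target.
# Attribute file names below are assumed; only the echoed values come from the trace.

hostnqn=nqn.2024-02.io.spdk:host0
hostdir=/sys/kernel/config/nvmet/hosts/$hostnqn

digest=sha256          # one of the digests the test cycles through
dhgroup=ffdhe2048      # one of the DH groups the test cycles through
key='DHHC-1:00:...'    # DH-HMAC-CHAP host key (elided here)
ckey='DHHC-1:02:...'   # optional bidirectional (controller) key (elided here)

echo "hmac(${digest})" > "$hostdir/dhchap_hash"      # assumed attribute name
echo "$dhgroup"        > "$hostdir/dhchap_dhgroup"   # assumed attribute name
echo "$key"            > "$hostdir/dhchap_key"       # assumed attribute name
[[ -n "$ckey" ]] && echo "$ckey" > "$hostdir/dhchap_ctrl_key"  # assumed attribute name
```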
00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.799 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.058 nvme0n1 00:24:21.058 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.058 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.058 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.058 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.058 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.058 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.058 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.058 12:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.058 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.058 12:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.058 12:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.058 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:21.058 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.058 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:21.058 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.058 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:21.058 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:21.058 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:21.058 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:21.058 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:21.058 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:21.058 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:21.647 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:21.647 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: ]] 00:24:21.647 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:21.647 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:21.647 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
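On the initiator side, every iteration in the trace runs the same connect_authenticate cycle: restrict the allowed DH-HMAC-CHAP digests and DH groups via bdev_nvme_set_options, attach the controller with the matching key (and controller key, when one is set), confirm the controller appears in bdev_nvme_get_controllers, and detach. A minimal sketch of that cycle follows, assuming rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py and that key1/ckey1 are key names already registered with the target application, as the traced commands suggest.

```bash
#!/usr/bin/env bash
# Illustrative connect/verify/detach cycle distilled from the traced commands above.

digests=sha256        # comma-separated in the multi-algorithm pass, e.g. sha256,sha384,sha512
dhgroups=ffdhe2048    # e.g. ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
keyid=1

# Limit the initiator to the digest(s) and DH group(s) under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digests" --dhchap-dhgroups "$dhgroups"

# Attach with the DH-HMAC-CHAP key pair for this key ID.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Authentication succeeded only if the controller is visible; then tear it down.
[[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0
```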
00:24:21.647 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:21.647 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:21.647 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:21.647 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.647 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:21.647 12:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.647 12:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.647 12:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.647 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.647 12:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:21.647 12:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:21.647 12:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:21.647 12:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.647 12:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.647 12:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:21.647 12:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.648 12:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:21.648 12:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:21.648 12:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:21.648 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:21.648 12:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.648 12:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.906 nvme0n1 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: ]] 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.906 12:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.164 nvme0n1 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: ]] 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.164 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.421 nvme0n1 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: ]] 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.421 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.678 nvme0n1 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:22.678 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.679 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:22.679 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:22.679 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:22.679 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.679 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:22.679 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.679 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.679 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.679 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.679 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:22.679 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:22.679 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:22.679 12:35:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.679 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.679 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:22.679 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.679 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:22.679 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:22.679 12:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:22.679 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:22.679 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.679 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.937 nvme0n1 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:22.937 12:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: ]] 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.836 12:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.094 nvme0n1 00:24:25.094 12:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.094 12:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.094 12:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.094 12:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.094 12:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.094 12:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: ]] 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.094 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.351 nvme0n1 00:24:25.351 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.351 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.351 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.351 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.351 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.351 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: ]] 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.609 
12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.609 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.868 nvme0n1 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: ]] 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.868 12:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.434 nvme0n1 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.434 12:35:55 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:26.434 12:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:26.435 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:26.435 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.435 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.693 nvme0n1 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: ]] 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.693 12:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.258 nvme0n1 00:24:27.258 12:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.258 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.258 12:35:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.258 12:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.258 12:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.515 12:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.515 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.515 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.515 12:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.515 12:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.515 12:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.515 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.515 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:27.515 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.515 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:27.515 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:27.515 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:27.515 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:27.515 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:27.515 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:27.515 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:27.515 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:27.515 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: ]] 00:24:27.515 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:27.515 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.516 12:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.080 nvme0n1 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: ]] 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:28.080 12:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.081 12:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.081 12:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:28.081 12:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.081 12:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:28.081 12:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:28.081 12:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:28.081 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:28.081 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.081 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.742 nvme0n1 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.742 
12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: ]] 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:28.742 12:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:28.743 12:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.743 12:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.743 12:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
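Each connect_authenticate round in the trace above follows the same pattern: restrict the initiator's DH-HMAC-CHAP options to the digest/dhgroup under test, attach the controller with the host key and controller key for that keyid, confirm via bdev_nvme_get_controllers that the controller actually came up, then detach it before the next combination. A minimal sketch of one such round, written as direct scripts/rpc.py calls instead of the test's rpc_cmd wrapper, and assuming the keyring entries key1/ckey1 plus the matching target-side DHHC-1 secrets were set up earlier in auth.sh (address, port, and NQNs mirror the values in the trace):

#!/usr/bin/env bash
set -euo pipefail
rpc=scripts/rpc.py   # assumed path to SPDK's RPC client

# Allow only the digest/dhgroup pair under test for DH-HMAC-CHAP.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# Attach with in-band authentication: key1 authenticates the host,
# ckey1 is the controller (bidirectional) key for keyid 1.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# The attach only succeeds if authentication passed; verify the controller is present.
[[ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]

# Tear down before the next digest/dhgroup/keyid combination.
$rpc bdev_nvme_detach_controller nvme0
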
00:24:28.743 12:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.743 12:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:28.743 12:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:28.743 12:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:28.743 12:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:28.743 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.743 12:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.307 nvme0n1 00:24:29.307 12:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.307 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.307 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.307 12:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.307 12:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.307 12:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:29.563 
12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.563 12:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.127 nvme0n1 00:24:30.127 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.127 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.127 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.127 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: ]] 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.128 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.385 nvme0n1 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:30.385 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: ]] 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.386 nvme0n1 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: ]] 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.386 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.644 nvme0n1 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: ]] 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.644 nvme0n1 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.644 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.903 nvme0n1 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: ]] 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:30.903 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.904 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:30.904 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.904 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.904 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.904 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.904 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:30.904 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:30.904 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:30.904 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.904 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.904 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:24:30.904 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.904 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:30.904 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:30.904 12:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:30.904 12:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:30.904 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.904 12:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.161 nvme0n1 00:24:31.161 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.161 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.161 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.161 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: ]] 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
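The check that follows each successful attach is likewise visible verbatim in the trace (host/auth.sh@64-65). A condensed sketch, again assuming rpc_cmd maps to scripts/rpc.py:

  # Confirm the authenticated controller actually came up, then tear it down
  # so the next digest/dhgroup/keyid combination starts from a clean state.
  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]
  scripts/rpc.py bdev_nvme_detach_controller nvme0
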
00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.162 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.420 nvme0n1 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: ]] 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.420 nvme0n1 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.420 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: ]] 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.690 nvme0n1 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.690 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.955 nvme0n1 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.955 12:36:00 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: ]] 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.955 12:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.213 nvme0n1 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: ]] 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.213 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.214 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:32.214 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.214 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:32.214 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:32.214 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:32.214 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:32.214 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.214 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.471 nvme0n1 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.471 12:36:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: ]] 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.471 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.771 nvme0n1 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: ]] 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:32.771 12:36:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.771 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.028 nvme0n1 00:24:33.028 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.028 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.028 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.028 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.028 12:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.028 12:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:33.028 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.286 nvme0n1 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: ]] 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.286 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.851 nvme0n1 00:24:33.851 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.851 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.851 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.851 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.851 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.851 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.851 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.851 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.851 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.851 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: ]] 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.852 12:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.111 nvme0n1 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.111 12:36:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: ]] 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.111 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.112 12:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:34.112 12:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:34.112 12:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:34.112 12:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.112 12:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.112 12:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:34.112 12:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.112 12:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:34.112 12:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:34.112 12:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:34.112 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:34.112 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.112 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.677 nvme0n1 00:24:34.677 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.677 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.677 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.677 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.677 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.677 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.677 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.677 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.677 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.677 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.677 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: ]] 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.678 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.935 nvme0n1 00:24:34.935 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.935 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.935 12:36:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.935 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.935 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.935 12:36:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.192 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:24:35.192 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.192 12:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.192 12:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.192 12:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.192 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.192 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
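The trace above repeats one cycle per (digest, dhgroup, keyid) combination: install the key on the target, constrain the host's allowed DH-HMAC-CHAP parameters, attach with the matching key pair, confirm the controller name, and detach. A minimal bash sketch of that cycle, assuming the helpers and arrays provided by the surrounding test scripts (rpc_cmd, nvmet_auth_set_key, keys/ckeys, and the key0..key4 / ckey0..ckey4 names registered earlier), not the verbatim upstream auth.sh:

    digest=sha384
    for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
      for keyid in "${!keys[@]}"; do
        # target side: set the DH-HMAC-CHAP key (and controller key, when present) for this keyid
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        # host side: restrict the negotiable digests/dhgroups, then connect with the matching keys
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        ckey_opt=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # keyid 4 carries no controller key in this run
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey_opt[@]}"
        # verify the authenticated controller came up, then tear it down for the next combination
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done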
00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.193 12:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.450 nvme0n1 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: ]] 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
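Each attach in the trace is preceded by the same get_main_ns_ip expansion (nvmf/common.sh@741-755), which resolves the address passed to -a from the transport in use. A rough reconstruction of that helper, assuming the transport is carried in a variable such as TEST_TRANSPORT (the trace only shows its expanded value, tcp) and that NVMF_INITIATOR_IP is 10.0.0.1 as in this environment:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs dial the initiator IP
        [[ -z "$TEST_TRANSPORT" ]] && return 1
        [[ -z "${ip_candidates[$TEST_TRANSPORT]}" ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # indirect expansion of the selected variable name yields the address (10.0.0.1 here)
        [[ -z "${!ip}" ]] && return 1
        echo "${!ip}"
    }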
00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.450 12:36:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.379 nvme0n1 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: ]] 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.379 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.944 nvme0n1 00:24:36.944 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.944 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.944 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: ]] 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.945 12:36:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.511 nvme0n1 00:24:37.511 12:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.511 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.511 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.511 12:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.511 12:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.511 12:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: ]] 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.769 12:36:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.354 nvme0n1 00:24:38.354 12:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.354 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:38.354 12:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.354 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.354 12:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.354 12:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.354 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.354 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.354 12:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.354 12:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.354 12:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.354 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.354 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:38.354 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.354 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:38.354 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:38.354 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:38.354 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:38.354 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:38.354 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:38.354 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:38.355 12:36:07 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.355 12:36:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.289 nvme0n1 00:24:39.289 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.289 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.289 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.289 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.289 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.289 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.289 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.289 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.289 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.289 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.289 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: ]] 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.290 nvme0n1 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.290 12:36:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: ]] 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.290 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.549 nvme0n1 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: ]] 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.549 nvme0n1 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.549 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.808 12:36:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: ]] 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:39.808 12:36:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.808 nvme0n1 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.808 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.066 nvme0n1 00:24:40.066 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.066 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.066 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.066 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.066 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.066 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.066 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.066 12:36:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.066 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.066 12:36:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: ]] 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.066 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:40.067 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.067 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.067 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:40.067 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.067 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:40.067 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:40.067 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:40.067 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:40.067 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.067 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.325 nvme0n1 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.325 
12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: ]] 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.325 12:36:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.325 nvme0n1 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.325 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
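Each pass above repeats the same host-side RPC sequence for one digest/dhgroup/keyid combination: restrict the allowed DH-HMAC-CHAP parameters, attach with the matching key, confirm the controller, then detach before the next pass. A minimal sketch of that sequence, assuming the standard SPDK scripts/rpc.py client and the address, NQNs and pre-registered key names (key2/ckey2) used in this run:

    # limit the initiator to one digest and one DH group for this pass
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    # attach, authenticating with key2 and the optional controller key ckey2
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # verify the controller came up, then remove it before the next combination
    scripts/rpc.py bdev_nvme_get_controllers
    scripts/rpc.py bdev_nvme_detach_controller nvme0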
00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: ]] 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.583 nvme0n1 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.583 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.842 12:36:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: ]] 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
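On the target side, each nvmet_auth_set_key call reduces to a few writes for the given keyid, visible above as the quoted echo commands. A rough sketch, assuming the usual Linux nvmet configfs layout (the destination paths are not shown in this trace, so the directory below is an assumption) and this run's host NQN:

    # hypothetical path; keyid 3 values taken from the sha512/ffdhe3072 pass above
    HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)'  > "$HOST/dhchap_hash"      # digest used for DH-HMAC-CHAP
    echo 'ffdhe3072'     > "$HOST/dhchap_dhgroup"   # DH group
    echo 'DHHC-1:02:...' > "$HOST/dhchap_key"       # host key (key3), elided here
    echo 'DHHC-1:00:...' > "$HOST/dhchap_ctrl_key"  # controller key (ckey3), when one is set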
00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.842 nvme0n1 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.842 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:41.100 
12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.100 12:36:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.100 nvme0n1 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: ]] 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:41.100 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.101 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.359 nvme0n1 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:41.359 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: ]] 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.360 12:36:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.360 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.617 nvme0n1 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
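The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line that recurs in the trace is what makes bidirectional authentication optional per key ID: when ckeys[keyid] is empty (as for key ID 4, whose ckey is blank), the array expands to nothing and the attach call carries no --dhchap-ctrlr-key at all. A self-contained illustration of that expansion, using placeholder values rather than the test's real secrets:

#!/usr/bin/env bash
# Demonstrates the ${var:+...} array pattern the trace uses to make the
# controller (bidirectional) key optional. The secrets below are placeholders.
ckeys=([3]="DHHC-1:00:placeholder-ctrlr-secret:" [4]="")

for keyid in 3 4; do
    # Expands to two extra arguments only when ckeys[keyid] is non-empty.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> extra args: ${ckey[*]:-<none>}"
done
# Prints:
#   keyid=3 -> extra args: --dhchap-ctrlr-key ckey3
#   keyid=4 -> extra args: <none>

The same expansion is why the key ID 4 rounds in this log attach with --dhchap-key key4 only, while the other key IDs also pass a controller key.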
00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: ]] 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.617 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.875 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.875 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.875 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:41.875 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.875 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.875 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.875 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.875 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:41.875 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.875 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:41.875 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:41.875 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:41.875 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:41.875 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.875 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.875 nvme0n1 00:24:41.875 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.875 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:24:41.875 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.875 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.875 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.876 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.876 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.876 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.876 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.876 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: ]] 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.133 12:36:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.133 nvme0n1 00:24:42.133 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.133 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.133 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.133 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.133 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.133 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.391 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.391 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.391 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.391 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.391 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:42.391 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.391 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:42.391 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.392 nvme0n1 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.392 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: ]] 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.650 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.651 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.651 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.651 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.651 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.651 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:24:42.651 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.651 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.651 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.651 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.651 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:42.651 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.651 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.909 nvme0n1 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:42.909 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: ]] 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
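On the target side, each nvmet_auth_set_key call in the trace echoes the digest as a kernel crypto name ('hmac(sha512)'), the FFDHE group, and the DHHC-1 secrets for the host entry. The trace shows only the echoed values, not their destinations; the sketch below assumes the Linux nvmet target's configfs layout (dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key attributes under the allowed-host directory), which is an assumption rather than something visible in this log. The secrets are the key ID 1 values from the surrounding ffdhe6144 round.

#!/usr/bin/env bash
# Target-side half of a round: program the in-kernel nvmet target with the
# host's DH-HMAC-CHAP parameters. Configfs paths are an assumption; the echoed
# values come from the ffdhe6144 / key ID 1 round in the trace above.
set -euo pipefail

hostnqn=nqn.2024-02.io.spdk:host0
host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn   # assumption: nvmet allowed-host entry

key='DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==:'
ckey='DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==:'

echo 'hmac(sha512)' > "$host_cfg/dhchap_hash"     # digest (kernel crypto name)
echo 'ffdhe6144'    > "$host_cfg/dhchap_dhgroup"  # DH group for augmented CHAP
echo "$key"         > "$host_cfg/dhchap_key"      # host secret
echo "$ckey"        > "$host_cfg/dhchap_ctrl_key" # controller secret (bidirectional)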
00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.910 12:36:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.478 nvme0n1 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: ]] 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:43.478 12:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:43.479 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:43.479 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.479 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.737 nvme0n1 00:24:43.737 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.737 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.737 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.737 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.737 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.737 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: ]] 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.995 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.996 12:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:43.996 12:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:43.996 12:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:43.996 12:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.996 12:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.996 12:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:43.996 12:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.996 12:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:43.996 12:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:43.996 12:36:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:43.996 12:36:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:43.996 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.996 12:36:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.254 nvme0n1 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.254 12:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.255 12:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.255 12:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.255 12:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.255 12:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.255 12:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.255 12:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.255 12:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.255 12:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.255 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:44.255 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.255 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.821 nvme0n1 00:24:44.821 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.821 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.821 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.821 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.821 12:36:13 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.821 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.821 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.821 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.821 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.821 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.821 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.821 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:44.821 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.821 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MThlYmY3NDlkOWI0ODFjOWQ5MjQwZjYyNDNjYzQ2MzUc7jN+: 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: ]] 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTE0ZTQ3M2YxMmY5YjkxZTI5MDZhYTgyYmQxZmM0NWJmNDlmYjE5NzZjZTJiZjBmNmVmODcyMTg0MzlkNDU2OYo9XIA=: 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.822 12:36:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.390 nvme0n1 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: ]] 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:45.390 12:36:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:45.648 12:36:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:45.648 12:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.648 12:36:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.215 nvme0n1 00:24:46.215 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.216 12:36:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcwMTE1NGM4YTE0OTE5MDRjM2Q2ZGU3ODI4MmFlODH+NWCx: 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: ]] 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjRjNzZhMTRhYTYzOTUzMzBiNGU5MTRhZGZhNzc0NjBU13X0: 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.216 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.782 nvme0n1 00:24:46.782 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.782 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.782 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.782 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.782 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.782 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDE3YTM3M2ZhOGY5NGVkMjIzMWYyZmYyYTkwNTUzOGY5MjZiYzIwYmEwZmNjN2Fl9yfaOg==: 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: ]] 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzljZTA1MjU1ZGRiYzQ0NDM4ZTg5MWFjMTM0MGY2MGH1U/V5: 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:47.039 12:36:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:47.039 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.040 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:47.040 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.040 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.040 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.040 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.040 12:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:47.040 12:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:47.040 12:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:47.040 12:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.040 12:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.040 12:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:47.040 12:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.040 12:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:47.040 12:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:47.040 12:36:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:47.040 12:36:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:47.040 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.040 12:36:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.606 nvme0n1 00:24:47.606 12:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.606 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.606 12:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.606 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.606 12:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.606 12:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.606 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.606 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.606 12:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.606 12:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.606 12:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.606 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:47.606 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:47.606 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWY4NjFkYWVkYTNlYzYxYTU5NDIwZDY4ZDJjZGE0NDNlYjZhZjZjZmI0NGNhYWUyY2M3NWVkNGM4MzE5MGVjMCuPVEk=: 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:47.607 12:36:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.562 nvme0n1 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRjNjc3NWU5OGJmMjY5YTZlMzY1NTUzMTc0MDM4ZDM2MGVkZDIwZmZiNWNhODJmCvcVSA==: 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: ]] 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGUyMzFlYWFhYWIwNzcwYmUzMDZiMDAxZjg4NmQyYTBjYzc5NDUwMzIwMjkzNGQ2EbHVQg==: 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.562 
12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.562 request: 00:24:48.562 { 00:24:48.562 "name": "nvme0", 00:24:48.562 "trtype": "tcp", 00:24:48.562 "traddr": "10.0.0.1", 00:24:48.562 "adrfam": "ipv4", 00:24:48.562 "trsvcid": "4420", 00:24:48.562 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:48.562 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:48.562 "prchk_reftag": false, 00:24:48.562 "prchk_guard": false, 00:24:48.562 "hdgst": false, 00:24:48.562 "ddgst": false, 00:24:48.562 "method": "bdev_nvme_attach_controller", 00:24:48.562 "req_id": 1 00:24:48.562 } 00:24:48.562 Got JSON-RPC error response 00:24:48.562 response: 00:24:48.562 { 00:24:48.562 "code": -5, 00:24:48.562 "message": "Input/output error" 00:24:48.562 } 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.562 request: 00:24:48.562 { 00:24:48.562 "name": "nvme0", 00:24:48.562 "trtype": "tcp", 00:24:48.562 "traddr": "10.0.0.1", 00:24:48.562 "adrfam": "ipv4", 00:24:48.562 "trsvcid": "4420", 00:24:48.562 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:48.562 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:48.562 "prchk_reftag": false, 00:24:48.562 "prchk_guard": false, 00:24:48.562 "hdgst": false, 00:24:48.562 "ddgst": false, 00:24:48.562 "dhchap_key": "key2", 00:24:48.562 "method": "bdev_nvme_attach_controller", 00:24:48.562 "req_id": 1 00:24:48.562 } 00:24:48.562 Got JSON-RPC error response 00:24:48.562 response: 00:24:48.562 { 00:24:48.562 "code": -5, 00:24:48.562 "message": "Input/output error" 00:24:48.562 } 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:48.562 12:36:17 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:48.562 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:48.563 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:48.563 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.563 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.563 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:48.563 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.563 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:48.563 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:48.563 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:48.563 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:48.563 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:48.563 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:48.563 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:48.563 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:48.563 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:48.563 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:48.563 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:48.563 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.563 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.563 request: 00:24:48.563 { 00:24:48.563 "name": "nvme0", 00:24:48.563 "trtype": "tcp", 00:24:48.563 "traddr": "10.0.0.1", 00:24:48.563 "adrfam": "ipv4", 
00:24:48.563 "trsvcid": "4420", 00:24:48.563 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:48.563 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:48.563 "prchk_reftag": false, 00:24:48.563 "prchk_guard": false, 00:24:48.563 "hdgst": false, 00:24:48.563 "ddgst": false, 00:24:48.563 "dhchap_key": "key1", 00:24:48.563 "dhchap_ctrlr_key": "ckey2", 00:24:48.563 "method": "bdev_nvme_attach_controller", 00:24:48.563 "req_id": 1 00:24:48.563 } 00:24:48.563 Got JSON-RPC error response 00:24:48.563 response: 00:24:48.820 { 00:24:48.820 "code": -5, 00:24:48.820 "message": "Input/output error" 00:24:48.820 } 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:48.820 rmmod nvme_tcp 00:24:48.820 rmmod nvme_fabrics 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 93435 ']' 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 93435 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 93435 ']' 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 93435 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93435 00:24:48.820 killing process with pid 93435 00:24:48.820 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:48.821 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:48.821 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93435' 00:24:48.821 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 93435 00:24:48.821 12:36:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 93435 00:24:49.079 12:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:49.079 
12:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:49.079 12:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:49.079 12:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:49.079 12:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:49.079 12:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.079 12:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:49.079 12:36:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.079 12:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:49.079 12:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:49.079 12:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:49.079 12:36:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:49.079 12:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:49.079 12:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:24:49.079 12:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:49.079 12:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:49.337 12:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:49.337 12:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:49.337 12:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:49.337 12:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:49.337 12:36:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:49.905 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:49.905 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:50.163 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:50.163 12:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.FHT /tmp/spdk.key-null.aJa /tmp/spdk.key-sha256.HBD /tmp/spdk.key-sha384.N5a /tmp/spdk.key-sha512.qba /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:24:50.163 12:36:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:50.421 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:50.421 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:50.421 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:50.680 00:24:50.680 real 0m35.644s 00:24:50.680 user 0m31.902s 00:24:50.680 sys 0m3.594s 00:24:50.680 12:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:50.680 ************************************ 00:24:50.680 END TEST nvmf_auth_host 00:24:50.680 
************************************ 00:24:50.680 12:36:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.680 12:36:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:50.680 12:36:19 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:24:50.680 12:36:19 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:50.680 12:36:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:50.680 12:36:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:50.680 12:36:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:50.680 ************************************ 00:24:50.680 START TEST nvmf_digest 00:24:50.680 ************************************ 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:50.680 * Looking for test storage... 00:24:50.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.680 12:36:19 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:50.681 Cannot find device "nvmf_tgt_br" 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:50.681 Cannot find device "nvmf_tgt_br2" 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:50.681 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:50.940 Cannot find device "nvmf_tgt_br" 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:24:50.940 12:36:19 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:50.940 Cannot find device "nvmf_tgt_br2" 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:50.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:50.940 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:50.940 12:36:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:50.940 12:36:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:51.198 12:36:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:51.198 12:36:20 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:51.198 12:36:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:51.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:51.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:24:51.198 00:24:51.198 --- 10.0.0.2 ping statistics --- 00:24:51.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.199 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:51.199 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:51.199 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:24:51.199 00:24:51.199 --- 10.0.0.3 ping statistics --- 00:24:51.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.199 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:51.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:51.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:24:51.199 00:24:51.199 --- 10.0.0.1 ping statistics --- 00:24:51.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.199 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:51.199 ************************************ 00:24:51.199 START TEST nvmf_digest_clean 00:24:51.199 ************************************ 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:51.199 12:36:20 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=95006 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 95006 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 95006 ']' 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:51.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:51.199 12:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:51.199 [2024-07-12 12:36:20.157500] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:24:51.199 [2024-07-12 12:36:20.157611] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.457 [2024-07-12 12:36:20.303416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.457 [2024-07-12 12:36:20.439611] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.457 [2024-07-12 12:36:20.439681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.457 [2024-07-12 12:36:20.439696] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.457 [2024-07-12 12:36:20.439707] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.457 [2024-07-12 12:36:20.439716] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
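The nvmftestinit and nvmfappstart steps traced above reduce to a short recipe: build a veth-plus-bridge topology with the target end in its own network namespace, open TCP port 4420 toward the initiator interface, confirm reachability with ping, then launch nvmf_tgt inside the namespace and wait for its RPC socket. A condensed sketch of that sequence follows, using the interface names, addresses, and paths shown in the trace; the second target interface (nvmf_tgt_if2/10.0.0.3) and error handling are omitted for brevity, so treat it as illustrative rather than the verbatim helper.

    # Topology built by nvmftestinit (names and addresses as in the trace above).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator
    # nvmfappstart: run the target inside the namespace and block until /var/tmp/spdk.sock answers.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &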
00:24:51.457 [2024-07-12 12:36:20.439757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:52.405 [2024-07-12 12:36:21.317363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:52.405 null0 00:24:52.405 [2024-07-12 12:36:21.380108] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.405 [2024-07-12 12:36:21.404234] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95044 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95044 /var/tmp/bperf.sock 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 95044 ']' 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:52.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:52.405 12:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:52.405 [2024-07-12 12:36:21.471228] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:24:52.405 [2024-07-12 12:36:21.471362] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95044 ] 00:24:52.688 [2024-07-12 12:36:21.613809] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.688 [2024-07-12 12:36:21.732560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.624 12:36:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:53.624 12:36:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:53.624 12:36:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:53.624 12:36:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:53.624 12:36:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:53.882 [2024-07-12 12:36:22.846516] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:53.882 12:36:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:53.882 12:36:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:54.139 nvme0n1 00:24:54.397 12:36:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:54.397 12:36:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:54.397 Running I/O for 2 seconds... 
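The initiator side mirrors this pattern: bdevperf is launched idle (-z) with its own RPC socket and --wait-for-rpc, initialization is completed over that socket, and the controller is attached with --ddgst so data PDUs are covered by a CRC32C data digest before perform_tests drives the 2-second run. A condensed sketch of the exact sequence in this trace (paths, address, and NQN are specific to this run):

  # launch bdevperf suspended; it waits on /var/tmp/bperf.sock for configuration
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # (the suite's waitforlisten helper blocks here until the socket is up)
  # finish framework init, then attach the target with data digest enabled
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # kick off the timed workload
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests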
00:24:56.311 00:24:56.311 Latency(us) 00:24:56.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.311 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:56.311 nvme0n1 : 2.01 15099.94 58.98 0.00 0.00 8469.13 7923.90 23354.65 00:24:56.311 =================================================================================================================== 00:24:56.311 Total : 15099.94 58.98 0.00 0.00 8469.13 7923.90 23354.65 00:24:56.311 0 00:24:56.311 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:56.311 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:56.311 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:56.311 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:56.311 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:56.311 | select(.opcode=="crc32c") 00:24:56.311 | "\(.module_name) \(.executed)"' 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95044 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 95044 ']' 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 95044 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95044 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:56.876 killing process with pid 95044 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95044' 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 95044 00:24:56.876 Received shutdown signal, test time was about 2.000000 seconds 00:24:56.876 00:24:56.876 Latency(us) 00:24:56.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.876 =================================================================================================================== 00:24:56.876 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 95044 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95099 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95099 /var/tmp/bperf.sock 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 95099 ']' 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:56.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:56.876 12:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:56.876 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:56.876 Zero copy mechanism will not be used. 00:24:56.876 [2024-07-12 12:36:25.948142] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:24:56.876 [2024-07-12 12:36:25.948236] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95099 ] 00:24:57.134 [2024-07-12 12:36:26.081291] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.134 [2024-07-12 12:36:26.173504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.068 12:36:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:58.068 12:36:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:58.068 12:36:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:58.068 12:36:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:58.069 12:36:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:58.069 [2024-07-12 12:36:27.136495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:58.326 12:36:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:58.326 12:36:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:58.585 nvme0n1 00:24:58.585 12:36:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:58.585 12:36:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:58.585 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:58.585 Zero copy mechanism will not be used. 00:24:58.585 Running I/O for 2 seconds... 
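A quick consistency check on the first table above (randread, 4096-byte I/O, queue depth 128): 15099.94 IOPS × 4096 bytes ≈ 61,849,354 bytes/s, and 61,849,354 / 1,048,576 ≈ 58.98 MiB/s, matching the reported MiB/s column. The same relation, MiB/s = IOPS × I/O size / 2^20, holds for the 131072-byte and randwrite tables that follow.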
00:25:00.558 00:25:00.558 Latency(us) 00:25:00.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.558 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:00.558 nvme0n1 : 2.00 7574.51 946.81 0.00 0.00 2108.75 1861.82 9472.93 00:25:00.558 =================================================================================================================== 00:25:00.558 Total : 7574.51 946.81 0.00 0.00 2108.75 1861.82 9472.93 00:25:00.558 0 00:25:00.558 12:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:00.558 12:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:00.558 12:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:00.558 12:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:00.558 | select(.opcode=="crc32c") 00:25:00.558 | "\(.module_name) \(.executed)"' 00:25:00.558 12:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:00.815 12:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:00.815 12:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:00.815 12:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:00.815 12:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:00.815 12:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95099 00:25:00.815 12:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 95099 ']' 00:25:00.815 12:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 95099 00:25:00.815 12:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:00.815 12:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:00.815 12:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95099 00:25:01.073 12:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:01.073 12:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:01.073 killing process with pid 95099 00:25:01.073 12:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95099' 00:25:01.073 Received shutdown signal, test time was about 2.000000 seconds 00:25:01.073 00:25:01.073 Latency(us) 00:25:01.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.073 =================================================================================================================== 00:25:01.073 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:01.073 12:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 95099 00:25:01.073 12:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 95099 00:25:01.073 12:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:01.073 12:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:01.073 12:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:01.073 12:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:01.073 12:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:01.073 12:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:01.073 12:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:01.073 12:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95159 00:25:01.073 12:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95159 /var/tmp/bperf.sock 00:25:01.073 12:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:01.073 12:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 95159 ']' 00:25:01.073 12:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:01.073 12:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:01.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:01.073 12:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:01.073 12:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:01.073 12:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:01.329 [2024-07-12 12:36:30.157831] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:25:01.329 [2024-07-12 12:36:30.157916] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95159 ] 00:25:01.329 [2024-07-12 12:36:30.292312] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.329 [2024-07-12 12:36:30.384329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.260 12:36:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:02.260 12:36:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:02.260 12:36:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:02.260 12:36:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:02.260 12:36:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:02.537 [2024-07-12 12:36:31.406891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:02.537 12:36:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:02.537 12:36:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:02.794 nvme0n1 00:25:02.794 12:36:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:02.794 12:36:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:02.794 Running I/O for 2 seconds... 
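After each run the test reads the crc32c accounting back from the accel framework and verifies that the expected module (software here, since DSA scanning is disabled) actually executed operations. The jq filter shown above flattens the accel_get_stats JSON into a single "module count" pair that the shell splits with read -r. A sketch of that pipeline against the bperf socket; the printed count is illustrative, not taken from this run:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # -> e.g. "software 12345" (illustrative count); the caller then does
  #      read -r acc_module acc_executed
  # and asserts acc_executed > 0 and acc_module matches the expected "software" module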
00:25:05.317 00:25:05.317 Latency(us) 00:25:05.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.317 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:05.317 nvme0n1 : 2.00 15988.00 62.45 0.00 0.00 7999.03 2606.55 16443.58 00:25:05.317 =================================================================================================================== 00:25:05.317 Total : 15988.00 62.45 0.00 0.00 7999.03 2606.55 16443.58 00:25:05.317 0 00:25:05.317 12:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:05.317 12:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:05.317 12:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:05.317 12:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:05.317 | select(.opcode=="crc32c") 00:25:05.317 | "\(.module_name) \(.executed)"' 00:25:05.318 12:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:05.318 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:05.318 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:05.318 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:05.318 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:05.318 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95159 00:25:05.318 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 95159 ']' 00:25:05.318 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 95159 00:25:05.318 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:05.318 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:05.318 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95159 00:25:05.318 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:05.318 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:05.318 killing process with pid 95159 00:25:05.318 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95159' 00:25:05.318 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 95159 00:25:05.318 Received shutdown signal, test time was about 2.000000 seconds 00:25:05.318 00:25:05.318 Latency(us) 00:25:05.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.318 =================================================================================================================== 00:25:05.318 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:05.318 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 95159 00:25:05.576 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:05.576 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:05.576 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:05.576 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:05.576 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:05.576 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:05.576 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:05.576 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95215 00:25:05.576 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95215 /var/tmp/bperf.sock 00:25:05.576 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:05.576 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 95215 ']' 00:25:05.576 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:05.576 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:05.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:05.576 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:05.576 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:05.576 12:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:05.576 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:05.576 Zero copy mechanism will not be used. 00:25:05.576 [2024-07-12 12:36:34.472563] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:25:05.576 [2024-07-12 12:36:34.472681] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95215 ] 00:25:05.576 [2024-07-12 12:36:34.611219] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.834 [2024-07-12 12:36:34.707517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.400 12:36:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:06.400 12:36:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:06.400 12:36:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:06.400 12:36:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:06.400 12:36:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:06.658 [2024-07-12 12:36:35.729296] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:06.917 12:36:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:06.917 12:36:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:07.175 nvme0n1 00:25:07.175 12:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:07.175 12:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:07.432 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:07.432 Zero copy mechanism will not be used. 00:25:07.432 Running I/O for 2 seconds... 
00:25:09.332 00:25:09.332 Latency(us) 00:25:09.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.332 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:09.332 nvme0n1 : 2.00 5891.50 736.44 0.00 0.00 2709.67 2010.76 8757.99 00:25:09.332 =================================================================================================================== 00:25:09.332 Total : 5891.50 736.44 0.00 0.00 2709.67 2010.76 8757.99 00:25:09.332 0 00:25:09.332 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:09.332 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:09.332 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:09.332 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:09.332 | select(.opcode=="crc32c") 00:25:09.332 | "\(.module_name) \(.executed)"' 00:25:09.332 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:09.590 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:09.590 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:09.590 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:09.590 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:09.590 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95215 00:25:09.590 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 95215 ']' 00:25:09.590 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 95215 00:25:09.590 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:09.590 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:09.590 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95215 00:25:09.590 killing process with pid 95215 00:25:09.590 Received shutdown signal, test time was about 2.000000 seconds 00:25:09.590 00:25:09.590 Latency(us) 00:25:09.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.590 =================================================================================================================== 00:25:09.590 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:09.590 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:09.590 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:09.590 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95215' 00:25:09.590 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 95215 00:25:09.590 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 95215 00:25:09.848 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 95006 00:25:09.848 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 95006 ']' 00:25:09.848 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 95006 00:25:09.848 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:09.848 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:09.848 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95006 00:25:09.848 killing process with pid 95006 00:25:09.848 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:09.848 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:09.848 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95006' 00:25:09.848 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 95006 00:25:09.848 12:36:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 95006 00:25:10.106 00:25:10.106 real 0m19.029s 00:25:10.106 user 0m36.898s 00:25:10.106 sys 0m4.858s 00:25:10.106 12:36:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:10.106 12:36:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:10.106 ************************************ 00:25:10.106 END TEST nvmf_digest_clean 00:25:10.106 ************************************ 00:25:10.106 12:36:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:25:10.106 12:36:39 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:10.106 12:36:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:10.106 12:36:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:10.106 12:36:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:10.106 ************************************ 00:25:10.106 START TEST nvmf_digest_error 00:25:10.106 ************************************ 00:25:10.106 12:36:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:25:10.106 12:36:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:10.106 12:36:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:10.106 12:36:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:10.106 12:36:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:10.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:10.106 12:36:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=95304 00:25:10.106 12:36:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 95304 00:25:10.106 12:36:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 95304 ']' 00:25:10.106 12:36:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:10.106 12:36:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.106 12:36:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:10.106 12:36:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.106 12:36:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:10.106 12:36:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:10.365 [2024-07-12 12:36:39.254315] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:25:10.365 [2024-07-12 12:36:39.254425] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.365 [2024-07-12 12:36:39.394531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.623 [2024-07-12 12:36:39.492734] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:10.623 [2024-07-12 12:36:39.492824] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:10.623 [2024-07-12 12:36:39.492840] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:10.623 [2024-07-12 12:36:39.492851] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:10.623 [2024-07-12 12:36:39.492875] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:10.623 [2024-07-12 12:36:39.492905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:11.556 [2024-07-12 12:36:40.345486] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:11.556 [2024-07-12 12:36:40.409031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:11.556 null0 00:25:11.556 [2024-07-12 12:36:40.457942] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.556 [2024-07-12 12:36:40.482062] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95336 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95336 /var/tmp/bperf.sock 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 95336 ']' 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 
00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:11.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:11.556 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:11.557 12:36:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:11.557 [2024-07-12 12:36:40.548807] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:25:11.557 [2024-07-12 12:36:40.549202] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95336 ] 00:25:11.814 [2024-07-12 12:36:40.687992] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.814 [2024-07-12 12:36:40.789303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.814 [2024-07-12 12:36:40.845920] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:12.747 12:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:12.747 12:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:12.747 12:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:12.747 12:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:13.005 12:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:13.005 12:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.005 12:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:13.005 12:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.005 12:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:13.005 12:36:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:13.263 nvme0n1 00:25:13.263 12:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:13.263 12:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.263 12:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:13.263 12:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.263 12:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 
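The error-path variant differs from the clean test in how the accel chain is prepared: crc32c is assigned to the error module on the target, the initiator is told to track NVMe error statistics and keep retrying (--bdev-retry-count -1), and corruption is injected into 256 crc32c operations before perform_tests runs. A sketch assembled only from the RPCs visible in this trace (rpc_cmd is the suite's wrapper around rpc.py pointed at the target's /var/tmp/spdk.sock):

  # target side: route crc32c through the accel error module
  rpc_cmd accel_assign_opc -o crc32c -m error
  # initiator side (bperf socket): collect NVMe error stats, allow unlimited retries
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd accel_error_inject_error -o crc32c -t disable     # clear any previous injection
  # (controller is attached with --ddgst here, as in the clean test)
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests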
00:25:13.263 12:36:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:13.263 Running I/O for 2 seconds... 00:25:13.263 [2024-07-12 12:36:42.265061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.263 [2024-07-12 12:36:42.265118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.263 [2024-07-12 12:36:42.265135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.263 [2024-07-12 12:36:42.282390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.263 [2024-07-12 12:36:42.282456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.263 [2024-07-12 12:36:42.282472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.263 [2024-07-12 12:36:42.300134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.263 [2024-07-12 12:36:42.300216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.263 [2024-07-12 12:36:42.300232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.263 [2024-07-12 12:36:42.317852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.263 [2024-07-12 12:36:42.317911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.263 [2024-07-12 12:36:42.317926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.263 [2024-07-12 12:36:42.335234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.263 [2024-07-12 12:36:42.335296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.263 [2024-07-12 12:36:42.335312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.522 [2024-07-12 12:36:42.352693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.522 [2024-07-12 12:36:42.352745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.522 [2024-07-12 12:36:42.352760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.522 [2024-07-12 12:36:42.370146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.522 [2024-07-12 12:36:42.370193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:11834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.522 [2024-07-12 12:36:42.370208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.522 [2024-07-12 12:36:42.389281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.522 [2024-07-12 12:36:42.389361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.522 [2024-07-12 12:36:42.389383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.522 [2024-07-12 12:36:42.408918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.522 [2024-07-12 12:36:42.408998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.522 [2024-07-12 12:36:42.409020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.522 [2024-07-12 12:36:42.428483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.522 [2024-07-12 12:36:42.428560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.522 [2024-07-12 12:36:42.428583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.522 [2024-07-12 12:36:42.446726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.522 [2024-07-12 12:36:42.446797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.522 [2024-07-12 12:36:42.446814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.522 [2024-07-12 12:36:42.464244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.522 [2024-07-12 12:36:42.464297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.522 [2024-07-12 12:36:42.464313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.522 [2024-07-12 12:36:42.481886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.522 [2024-07-12 12:36:42.481942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.522 [2024-07-12 12:36:42.481958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.522 [2024-07-12 12:36:42.499490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.522 [2024-07-12 12:36:42.499543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.522 [2024-07-12 12:36:42.499559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.522 [2024-07-12 12:36:42.516679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.522 [2024-07-12 12:36:42.516754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.522 [2024-07-12 12:36:42.516773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.522 [2024-07-12 12:36:42.534229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.522 [2024-07-12 12:36:42.534304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.522 [2024-07-12 12:36:42.534329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.522 [2024-07-12 12:36:42.551894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.522 [2024-07-12 12:36:42.551946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.522 [2024-07-12 12:36:42.551962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.522 [2024-07-12 12:36:42.569484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.522 [2024-07-12 12:36:42.569538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.522 [2024-07-12 12:36:42.569553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.522 [2024-07-12 12:36:42.587354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.522 [2024-07-12 12:36:42.587413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.522 [2024-07-12 12:36:42.587429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.780 [2024-07-12 12:36:42.604877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.780 [2024-07-12 12:36:42.604942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.780 [2024-07-12 12:36:42.604961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.780 [2024-07-12 12:36:42.622460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.780 
[2024-07-12 12:36:42.622508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.780 [2024-07-12 12:36:42.622524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.780 [2024-07-12 12:36:42.639876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.780 [2024-07-12 12:36:42.639925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.780 [2024-07-12 12:36:42.639940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.780 [2024-07-12 12:36:42.657236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.780 [2024-07-12 12:36:42.657289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.780 [2024-07-12 12:36:42.657304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.780 [2024-07-12 12:36:42.674639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.780 [2024-07-12 12:36:42.674696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.780 [2024-07-12 12:36:42.674711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.780 [2024-07-12 12:36:42.692342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.780 [2024-07-12 12:36:42.692397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.780 [2024-07-12 12:36:42.692412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.780 [2024-07-12 12:36:42.710311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.780 [2024-07-12 12:36:42.710365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.780 [2024-07-12 12:36:42.710381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.780 [2024-07-12 12:36:42.728281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.780 [2024-07-12 12:36:42.728337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.780 [2024-07-12 12:36:42.728352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.781 [2024-07-12 12:36:42.745776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x512b40) 00:25:13.781 [2024-07-12 12:36:42.745846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.781 [2024-07-12 12:36:42.745872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.781 [2024-07-12 12:36:42.765542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.781 [2024-07-12 12:36:42.765617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.781 [2024-07-12 12:36:42.765640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.781 [2024-07-12 12:36:42.784869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.781 [2024-07-12 12:36:42.784949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.781 [2024-07-12 12:36:42.784974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.781 [2024-07-12 12:36:42.804253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.781 [2024-07-12 12:36:42.804341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.781 [2024-07-12 12:36:42.804364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.781 [2024-07-12 12:36:42.823508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.781 [2024-07-12 12:36:42.823577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.781 [2024-07-12 12:36:42.823599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.781 [2024-07-12 12:36:42.842729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:13.781 [2024-07-12 12:36:42.842814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.781 [2024-07-12 12:36:42.842836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.040 [2024-07-12 12:36:42.861869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.040 [2024-07-12 12:36:42.861942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.040 [2024-07-12 12:36:42.861976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.040 [2024-07-12 12:36:42.879377] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.040 [2024-07-12 12:36:42.879434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.040 [2024-07-12 12:36:42.879451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.040 [2024-07-12 12:36:42.896527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.040 [2024-07-12 12:36:42.896584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.040 [2024-07-12 12:36:42.896599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.040 [2024-07-12 12:36:42.913753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.040 [2024-07-12 12:36:42.913815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.040 [2024-07-12 12:36:42.913830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.040 [2024-07-12 12:36:42.931114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.040 [2024-07-12 12:36:42.931162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.040 [2024-07-12 12:36:42.931176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.040 [2024-07-12 12:36:42.948581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.040 [2024-07-12 12:36:42.948631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.040 [2024-07-12 12:36:42.948646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.040 [2024-07-12 12:36:42.966010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.040 [2024-07-12 12:36:42.966064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.040 [2024-07-12 12:36:42.966079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.040 [2024-07-12 12:36:42.983550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.040 [2024-07-12 12:36:42.983606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.040 [2024-07-12 12:36:42.983622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:14.041 [2024-07-12 12:36:43.000972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.041 [2024-07-12 12:36:43.001024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.041 [2024-07-12 12:36:43.001040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.041 [2024-07-12 12:36:43.018292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.041 [2024-07-12 12:36:43.018341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.041 [2024-07-12 12:36:43.018356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.041 [2024-07-12 12:36:43.035780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.041 [2024-07-12 12:36:43.035870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.041 [2024-07-12 12:36:43.035885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.041 [2024-07-12 12:36:43.053291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.041 [2024-07-12 12:36:43.053338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.041 [2024-07-12 12:36:43.053353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.041 [2024-07-12 12:36:43.070646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.041 [2024-07-12 12:36:43.070699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.041 [2024-07-12 12:36:43.070723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.041 [2024-07-12 12:36:43.088115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.041 [2024-07-12 12:36:43.088172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.041 [2024-07-12 12:36:43.088187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.041 [2024-07-12 12:36:43.105727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.041 [2024-07-12 12:36:43.105794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.041 [2024-07-12 12:36:43.105810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.355 [2024-07-12 12:36:43.123152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.355 [2024-07-12 12:36:43.123230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.355 [2024-07-12 12:36:43.123254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.355 [2024-07-12 12:36:43.140702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.355 [2024-07-12 12:36:43.140753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.355 [2024-07-12 12:36:43.140772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.355 [2024-07-12 12:36:43.157966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.355 [2024-07-12 12:36:43.158015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.355 [2024-07-12 12:36:43.158029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.355 [2024-07-12 12:36:43.175489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.355 [2024-07-12 12:36:43.175540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.355 [2024-07-12 12:36:43.175555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.355 [2024-07-12 12:36:43.193035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.355 [2024-07-12 12:36:43.193084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.355 [2024-07-12 12:36:43.193098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.355 [2024-07-12 12:36:43.210693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.355 [2024-07-12 12:36:43.210751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.355 [2024-07-12 12:36:43.210766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.355 [2024-07-12 12:36:43.228665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.355 [2024-07-12 12:36:43.228741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.355 [2024-07-12 12:36:43.228757] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.355 [2024-07-12 12:36:43.246402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.355 [2024-07-12 12:36:43.246452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.355 [2024-07-12 12:36:43.246467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.355 [2024-07-12 12:36:43.263970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.355 [2024-07-12 12:36:43.264056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.355 [2024-07-12 12:36:43.264089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.355 [2024-07-12 12:36:43.282653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.355 [2024-07-12 12:36:43.282728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.355 [2024-07-12 12:36:43.282744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.355 [2024-07-12 12:36:43.300966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.355 [2024-07-12 12:36:43.301038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.355 [2024-07-12 12:36:43.301054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.355 [2024-07-12 12:36:43.319085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.355 [2024-07-12 12:36:43.319162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.355 [2024-07-12 12:36:43.319179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.355 [2024-07-12 12:36:43.337317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.356 [2024-07-12 12:36:43.337420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.356 [2024-07-12 12:36:43.337436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.356 [2024-07-12 12:36:43.355623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.356 [2024-07-12 12:36:43.355718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.356 [2024-07-12 
12:36:43.355735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.356 [2024-07-12 12:36:43.373697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.356 [2024-07-12 12:36:43.373796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.356 [2024-07-12 12:36:43.373836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.356 [2024-07-12 12:36:43.399136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.356 [2024-07-12 12:36:43.399196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.356 [2024-07-12 12:36:43.399211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.356 [2024-07-12 12:36:43.416403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.356 [2024-07-12 12:36:43.416456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.356 [2024-07-12 12:36:43.416472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.614 [2024-07-12 12:36:43.434737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.614 [2024-07-12 12:36:43.434797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.614 [2024-07-12 12:36:43.434813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.614 [2024-07-12 12:36:43.452975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.614 [2024-07-12 12:36:43.453023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.614 [2024-07-12 12:36:43.453038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.614 [2024-07-12 12:36:43.470758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.614 [2024-07-12 12:36:43.470815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.614 [2024-07-12 12:36:43.470831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.614 [2024-07-12 12:36:43.488570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.614 [2024-07-12 12:36:43.488616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17188 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.614 [2024-07-12 12:36:43.488630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.614 [2024-07-12 12:36:43.506434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.614 [2024-07-12 12:36:43.506483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.614 [2024-07-12 12:36:43.506499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.614 [2024-07-12 12:36:43.524477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.614 [2024-07-12 12:36:43.524538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.614 [2024-07-12 12:36:43.524552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.614 [2024-07-12 12:36:43.542178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.614 [2024-07-12 12:36:43.542227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.614 [2024-07-12 12:36:43.542242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.614 [2024-07-12 12:36:43.559819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.614 [2024-07-12 12:36:43.559864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.614 [2024-07-12 12:36:43.559878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.614 [2024-07-12 12:36:43.577403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.614 [2024-07-12 12:36:43.577451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.614 [2024-07-12 12:36:43.577465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.614 [2024-07-12 12:36:43.594865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.614 [2024-07-12 12:36:43.594910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.614 [2024-07-12 12:36:43.594925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.614 [2024-07-12 12:36:43.612587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.614 [2024-07-12 12:36:43.612648] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.614 [2024-07-12 12:36:43.612663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.614 [2024-07-12 12:36:43.629979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.614 [2024-07-12 12:36:43.630024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.614 [2024-07-12 12:36:43.630040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.614 [2024-07-12 12:36:43.647142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.614 [2024-07-12 12:36:43.647180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.614 [2024-07-12 12:36:43.647195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.614 [2024-07-12 12:36:43.664387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.614 [2024-07-12 12:36:43.664439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.614 [2024-07-12 12:36:43.664454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.614 [2024-07-12 12:36:43.682014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.614 [2024-07-12 12:36:43.682078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.614 [2024-07-12 12:36:43.682093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.873 [2024-07-12 12:36:43.699689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.873 [2024-07-12 12:36:43.699754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.873 [2024-07-12 12:36:43.699769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.873 [2024-07-12 12:36:43.717216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.873 [2024-07-12 12:36:43.717277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.873 [2024-07-12 12:36:43.717296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.873 [2024-07-12 12:36:43.734738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.873 [2024-07-12 12:36:43.734801] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.873 [2024-07-12 12:36:43.734818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.873 [2024-07-12 12:36:43.751976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.873 [2024-07-12 12:36:43.752026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.873 [2024-07-12 12:36:43.752043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.873 [2024-07-12 12:36:43.769240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.873 [2024-07-12 12:36:43.769293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.873 [2024-07-12 12:36:43.769308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.873 [2024-07-12 12:36:43.786637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.873 [2024-07-12 12:36:43.786692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.873 [2024-07-12 12:36:43.786707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.873 [2024-07-12 12:36:43.804025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.873 [2024-07-12 12:36:43.804075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.873 [2024-07-12 12:36:43.804090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.873 [2024-07-12 12:36:43.821425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.873 [2024-07-12 12:36:43.821486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.873 [2024-07-12 12:36:43.821501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.873 [2024-07-12 12:36:43.838629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.874 [2024-07-12 12:36:43.838687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.874 [2024-07-12 12:36:43.838706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.874 [2024-07-12 12:36:43.856151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 
00:25:14.874 [2024-07-12 12:36:43.856199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.874 [2024-07-12 12:36:43.856213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.874 [2024-07-12 12:36:43.873659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.874 [2024-07-12 12:36:43.873707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.874 [2024-07-12 12:36:43.873722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.874 [2024-07-12 12:36:43.891398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.874 [2024-07-12 12:36:43.891451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.874 [2024-07-12 12:36:43.891466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.874 [2024-07-12 12:36:43.908793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.874 [2024-07-12 12:36:43.908841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.874 [2024-07-12 12:36:43.908856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.874 [2024-07-12 12:36:43.926153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.874 [2024-07-12 12:36:43.926203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.874 [2024-07-12 12:36:43.926218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.874 [2024-07-12 12:36:43.943605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:14.874 [2024-07-12 12:36:43.943656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.874 [2024-07-12 12:36:43.943671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.132 [2024-07-12 12:36:43.961045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:15.132 [2024-07-12 12:36:43.961100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.132 [2024-07-12 12:36:43.961115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.133 [2024-07-12 12:36:43.978911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x512b40) 00:25:15.133 [2024-07-12 12:36:43.978966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.133 [2024-07-12 12:36:43.978981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.133 [2024-07-12 12:36:43.996934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:15.133 [2024-07-12 12:36:43.996985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.133 [2024-07-12 12:36:43.997000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.133 [2024-07-12 12:36:44.014446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:15.133 [2024-07-12 12:36:44.014541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.133 [2024-07-12 12:36:44.014556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.133 [2024-07-12 12:36:44.032311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:15.133 [2024-07-12 12:36:44.032365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.133 [2024-07-12 12:36:44.032381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.133 [2024-07-12 12:36:44.049964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:15.133 [2024-07-12 12:36:44.050014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.133 [2024-07-12 12:36:44.050030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.133 [2024-07-12 12:36:44.067620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:15.133 [2024-07-12 12:36:44.067667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.133 [2024-07-12 12:36:44.067682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.133 [2024-07-12 12:36:44.085163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:15.133 [2024-07-12 12:36:44.085212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.133 [2024-07-12 12:36:44.085227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.133 [2024-07-12 12:36:44.102642] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:15.133 [2024-07-12 12:36:44.102715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.133 [2024-07-12 12:36:44.102733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.133 [2024-07-12 12:36:44.120099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:15.133 [2024-07-12 12:36:44.120147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.133 [2024-07-12 12:36:44.120162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.133 [2024-07-12 12:36:44.137436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:15.133 [2024-07-12 12:36:44.137502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.133 [2024-07-12 12:36:44.137516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.133 [2024-07-12 12:36:44.154889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:15.133 [2024-07-12 12:36:44.154937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.133 [2024-07-12 12:36:44.154952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.133 [2024-07-12 12:36:44.172502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:15.133 [2024-07-12 12:36:44.172547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.133 [2024-07-12 12:36:44.172561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.133 [2024-07-12 12:36:44.189957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:15.133 [2024-07-12 12:36:44.190007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.133 [2024-07-12 12:36:44.190021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.133 [2024-07-12 12:36:44.207452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:15.133 [2024-07-12 12:36:44.207498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.133 [2024-07-12 12:36:44.207512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:15.390 [2024-07-12 12:36:44.224757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:15.390 [2024-07-12 12:36:44.224822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.390 [2024-07-12 12:36:44.224837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.390 [2024-07-12 12:36:44.242257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x512b40) 00:25:15.390 [2024-07-12 12:36:44.242319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.390 [2024-07-12 12:36:44.242333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.390 00:25:15.390 Latency(us) 00:25:15.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.391 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:15.391 nvme0n1 : 2.01 14280.32 55.78 0.00 0.00 8956.64 8102.63 33840.41 00:25:15.391 =================================================================================================================== 00:25:15.391 Total : 14280.32 55.78 0.00 0.00 8956.64 8102.63 33840.41 00:25:15.391 0 00:25:15.391 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:15.391 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:15.391 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:15.391 | .driver_specific 00:25:15.391 | .nvme_error 00:25:15.391 | .status_code 00:25:15.391 | .command_transient_transport_error' 00:25:15.391 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:15.649 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 112 > 0 )) 00:25:15.649 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95336 00:25:15.649 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 95336 ']' 00:25:15.649 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 95336 00:25:15.649 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:15.649 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:15.649 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95336 00:25:15.649 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:15.649 killing process with pid 95336 00:25:15.649 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:15.649 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95336' 00:25:15.649 Received shutdown signal, test time was about 2.000000 seconds 00:25:15.649 00:25:15.649 Latency(us) 00:25:15.649 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.649 =================================================================================================================== 00:25:15.649 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:15.649 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 95336 00:25:15.649 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 95336 00:25:15.908 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:15.908 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:15.908 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:15.908 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:15.908 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:15.908 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95397 00:25:15.908 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95397 /var/tmp/bperf.sock 00:25:15.908 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:15.908 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 95397 ']' 00:25:15.908 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:15.908 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:15.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:15.908 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:15.908 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:15.908 12:36:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:15.908 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:15.908 Zero copy mechanism will not be used. 00:25:15.908 [2024-07-12 12:36:44.870061] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
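The trace above pulls the per-bdev error counters back over the bperf RPC socket and filters them with jq. A minimal sketch of that extraction, with the rpc.py path, socket and jq filter copied verbatim from the trace (the helper name mirrors the one in host/digest.sh; treat it as illustrative, not the harness's exact implementation):

    # Reconstruct the transient-transport-error count query shown in the trace.
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
    }
    # The "(( 112 > 0 ))" check in the trace is this count compared against zero:
    (( $(get_transient_errcount nvme0n1) > 0 )) && echo "transient transport errors observed"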
00:25:15.908 [2024-07-12 12:36:44.870174] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95397 ] 00:25:16.166 [2024-07-12 12:36:45.007194] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.166 [2024-07-12 12:36:45.111029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.166 [2024-07-12 12:36:45.170109] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:17.100 12:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:17.100 12:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:17.100 12:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:17.100 12:36:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:17.358 12:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:17.358 12:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.358 12:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:17.358 12:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.358 12:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:17.358 12:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:17.616 nvme0n1 00:25:17.616 12:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:17.616 12:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.616 12:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:17.616 12:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.616 12:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:17.616 12:36:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:17.875 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:17.875 Zero copy mechanism will not be used. 00:25:17.875 Running I/O for 2 seconds... 
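Before the run above starts issuing I/O, the trace shows the harness enabling NVMe error statistics, attaching the controller with TCP data digest, and arming crc32c error injection. A hedged reconstruction of that sequence, expanded into plain rpc.py calls: the RPC names, bperf socket, target address and NQN are copied from the trace, while TGT_SOCK is an assumption standing in for whatever socket the harness's rpc_cmd helper actually talks to.

    #!/usr/bin/env bash
    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF_SOCK=/var/tmp/bperf.sock
    TGT_SOCK=/var/tmp/spdk.sock   # hypothetical stand-in for the target app's RPC socket

    # Keep per-status-code NVMe error counters and retry indefinitely.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start with crc32c error injection disabled.
    "$SPDK/scripts/rpc.py" -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t disable
    # Attach the controller with data digest (--ddgst) so received payloads are CRC-checked.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm crc32c corruption (options copied verbatim from the trace); this is what
    # produces the "data digest error" / TRANSIENT TRANSPORT ERROR entries that follow.
    "$SPDK/scripts/rpc.py" -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32
    # Kick off the bdevperf job that was queued with -z (2-second randread, qd 16, 128 KiB I/O).
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests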
00:25:17.875 [2024-07-12 12:36:46.733172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.875 [2024-07-12 12:36:46.733239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.875 [2024-07-12 12:36:46.733256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.875 [2024-07-12 12:36:46.737502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.875 [2024-07-12 12:36:46.737545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.875 [2024-07-12 12:36:46.737560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.875 [2024-07-12 12:36:46.742121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.875 [2024-07-12 12:36:46.742161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.875 [2024-07-12 12:36:46.742175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.746379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.746435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.746453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.750552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.750593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.750606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.754745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.754798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.754813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.759017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.759056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.759069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.763141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.763179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.763194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.767380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.767418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.767432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.771716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.771754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.771768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.776133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.776172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.776187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.780571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.780611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.780625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.784927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.784965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.784995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.789233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.789287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.789317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.793720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.793760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.793774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.798059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.798098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.798112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.802457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.802496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.802510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.806938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.806976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.806990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.811352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.811391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.811404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.815643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.815683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.815697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.820011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.820049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:17.876 [2024-07-12 12:36:46.820062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.824476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.824515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.824529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.828928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.828981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.829011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.833380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.833421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.833436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.837645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.837685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.837698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.842117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.842157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.842170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.846528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.846571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.846585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.850828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.850866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.850883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.855045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.855084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.855097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.859265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.859315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.859328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.863570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.863609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.863623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.876 [2024-07-12 12:36:46.867914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.876 [2024-07-12 12:36:46.867952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.876 [2024-07-12 12:36:46.867965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.877 [2024-07-12 12:36:46.872251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.877 [2024-07-12 12:36:46.872291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.877 [2024-07-12 12:36:46.872305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.877 [2024-07-12 12:36:46.876722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.877 [2024-07-12 12:36:46.876762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.877 [2024-07-12 12:36:46.876776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.877 [2024-07-12 12:36:46.881219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.877 [2024-07-12 12:36:46.881257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.877 [2024-07-12 12:36:46.881270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.877 [2024-07-12 12:36:46.885690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.877 [2024-07-12 12:36:46.885744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.877 [2024-07-12 12:36:46.885758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.877 [2024-07-12 12:36:46.890138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.877 [2024-07-12 12:36:46.890177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.877 [2024-07-12 12:36:46.890190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.877 [2024-07-12 12:36:46.894573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.877 [2024-07-12 12:36:46.894613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.877 [2024-07-12 12:36:46.894627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.877 [2024-07-12 12:36:46.899060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.877 [2024-07-12 12:36:46.899099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.877 [2024-07-12 12:36:46.899113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.877 [2024-07-12 12:36:46.903390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.877 [2024-07-12 12:36:46.903428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.877 [2024-07-12 12:36:46.903441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.877 [2024-07-12 12:36:46.907828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.877 [2024-07-12 12:36:46.907882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.877 [2024-07-12 12:36:46.907895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.877 [2024-07-12 12:36:46.912216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 
00:25:17.877 [2024-07-12 12:36:46.912253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.877 [2024-07-12 12:36:46.912267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.877 [2024-07-12 12:36:46.916459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.877 [2024-07-12 12:36:46.916498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.877 [2024-07-12 12:36:46.916512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.877 [2024-07-12 12:36:46.920796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.877 [2024-07-12 12:36:46.920834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.877 [2024-07-12 12:36:46.920857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.877 [2024-07-12 12:36:46.925052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.877 [2024-07-12 12:36:46.925091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.877 [2024-07-12 12:36:46.925104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.877 [2024-07-12 12:36:46.929199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.877 [2024-07-12 12:36:46.929237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.877 [2024-07-12 12:36:46.929251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.877 [2024-07-12 12:36:46.933501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.877 [2024-07-12 12:36:46.933547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.877 [2024-07-12 12:36:46.933560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.877 [2024-07-12 12:36:46.937857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.877 [2024-07-12 12:36:46.937901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.877 [2024-07-12 12:36:46.937915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:17.877 [2024-07-12 12:36:46.942025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.877 [2024-07-12 12:36:46.942093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.877 [2024-07-12 12:36:46.942107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:17.877 [2024-07-12 12:36:46.946326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.877 [2024-07-12 12:36:46.946366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.877 [2024-07-12 12:36:46.946379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:17.877 [2024-07-12 12:36:46.950763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.877 [2024-07-12 12:36:46.950812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.877 [2024-07-12 12:36:46.950826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:17.877 [2024-07-12 12:36:46.955098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:17.877 [2024-07-12 12:36:46.955138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.877 [2024-07-12 12:36:46.955151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.137 [2024-07-12 12:36:46.959356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.137 [2024-07-12 12:36:46.959395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.137 [2024-07-12 12:36:46.959409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.137 [2024-07-12 12:36:46.963665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.137 [2024-07-12 12:36:46.963704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.137 [2024-07-12 12:36:46.963717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.137 [2024-07-12 12:36:46.967841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.137 [2024-07-12 12:36:46.967878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.137 [2024-07-12 12:36:46.967892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.137 [2024-07-12 12:36:46.972070] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.137 [2024-07-12 12:36:46.972109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.137 [2024-07-12 12:36:46.972122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.137 [2024-07-12 12:36:46.976297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.137 [2024-07-12 12:36:46.976336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.137 [2024-07-12 12:36:46.976349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.137 [2024-07-12 12:36:46.980702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.137 [2024-07-12 12:36:46.980742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.137 [2024-07-12 12:36:46.980755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.137 [2024-07-12 12:36:46.984991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.137 [2024-07-12 12:36:46.985030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.137 [2024-07-12 12:36:46.985043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.137 [2024-07-12 12:36:46.989416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.137 [2024-07-12 12:36:46.989455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.137 [2024-07-12 12:36:46.989484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.137 [2024-07-12 12:36:46.993745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.137 [2024-07-12 12:36:46.993797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.137 [2024-07-12 12:36:46.993812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.137 [2024-07-12 12:36:46.998076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.137 [2024-07-12 12:36:46.998115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.137 [2024-07-12 12:36:46.998129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:25:18.137 [2024-07-12 12:36:47.002388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.137 [2024-07-12 12:36:47.002427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.137 [2024-07-12 12:36:47.002440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.137 [2024-07-12 12:36:47.006824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.137 [2024-07-12 12:36:47.006860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.137 [2024-07-12 12:36:47.006873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.137 [2024-07-12 12:36:47.011142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.137 [2024-07-12 12:36:47.011181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.137 [2024-07-12 12:36:47.011194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.137 [2024-07-12 12:36:47.015471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.137 [2024-07-12 12:36:47.015509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.137 [2024-07-12 12:36:47.015523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.137 [2024-07-12 12:36:47.019743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.137 [2024-07-12 12:36:47.019782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.137 [2024-07-12 12:36:47.019811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.137 [2024-07-12 12:36:47.023983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.137 [2024-07-12 12:36:47.024022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.137 [2024-07-12 12:36:47.024035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.137 [2024-07-12 12:36:47.028322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.137 [2024-07-12 12:36:47.028366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.137 [2024-07-12 12:36:47.028380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.137 [2024-07-12 12:36:47.032638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.137 [2024-07-12 12:36:47.032677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.137 [2024-07-12 12:36:47.032691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.137 [2024-07-12 12:36:47.037033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.037074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.037087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.041328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.041367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.041381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.045627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.045665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.045679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.049892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.049930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.049943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.054153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.054192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.054206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.058331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.058380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.058398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.062564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.062604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.062618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.066936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.066976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.066989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.071319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.071363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.071378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.075680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.075730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.075752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.080083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.080121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.080136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.084459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.084499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.084512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.088781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.088829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:18.138 [2024-07-12 12:36:47.088844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.093242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.093287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.093301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.097673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.097715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.097729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.102033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.102072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.102086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.106278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.106318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.106332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.110562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.110601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.110616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.114890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.114929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.114943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.119085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.119124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.119138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.123442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.123483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.123497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.127715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.127755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.127768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.131941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.131977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.131991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.136157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.136195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.136209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.140412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.140452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.140465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.144729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.144768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.144782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.149014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.149053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.149067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.153202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.153242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.153256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.157450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.138 [2024-07-12 12:36:47.157493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.138 [2024-07-12 12:36:47.157507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.138 [2024-07-12 12:36:47.161723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.139 [2024-07-12 12:36:47.161762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.139 [2024-07-12 12:36:47.161776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.139 [2024-07-12 12:36:47.165980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.139 [2024-07-12 12:36:47.166019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.139 [2024-07-12 12:36:47.166032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.139 [2024-07-12 12:36:47.170213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.139 [2024-07-12 12:36:47.170254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.139 [2024-07-12 12:36:47.170269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.139 [2024-07-12 12:36:47.174492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.139 [2024-07-12 12:36:47.174532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.139 [2024-07-12 12:36:47.174545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.139 [2024-07-12 12:36:47.178885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 
00:25:18.139 [2024-07-12 12:36:47.178925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.139 [2024-07-12 12:36:47.178938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.139 [2024-07-12 12:36:47.183166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.139 [2024-07-12 12:36:47.183206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.139 [2024-07-12 12:36:47.183220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.139 [2024-07-12 12:36:47.187404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.139 [2024-07-12 12:36:47.187443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.139 [2024-07-12 12:36:47.187457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.139 [2024-07-12 12:36:47.191685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.139 [2024-07-12 12:36:47.191725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.139 [2024-07-12 12:36:47.191738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.139 [2024-07-12 12:36:47.195977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.139 [2024-07-12 12:36:47.196016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.139 [2024-07-12 12:36:47.196030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.139 [2024-07-12 12:36:47.200207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.139 [2024-07-12 12:36:47.200247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.139 [2024-07-12 12:36:47.200260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.139 [2024-07-12 12:36:47.204581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.139 [2024-07-12 12:36:47.204621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.139 [2024-07-12 12:36:47.204634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.139 [2024-07-12 12:36:47.208894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.139 [2024-07-12 12:36:47.208934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.139 [2024-07-12 12:36:47.208947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.139 [2024-07-12 12:36:47.213329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.139 [2024-07-12 12:36:47.213369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.139 [2024-07-12 12:36:47.213383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.399 [2024-07-12 12:36:47.217746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.217796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.217811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.222052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.222090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.222104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.226437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.226477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.226490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.230773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.230823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.230837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.234930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.234967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.234981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.239264] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.239320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.239334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.243481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.243520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.243534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.247737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.247775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.247804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.252033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.252071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.252085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.256461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.256500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.256514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.260669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.260708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.260722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.265074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.265113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.265126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.269358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.269398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.269411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.273708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.273749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.273763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.278084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.278121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.278135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.282358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.282396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.282409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.286696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.286751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.286764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.291039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.291079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.291093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.295257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.295308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.295322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.299585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.299628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.299642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.303858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.303895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.303909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.308057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.308094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.308107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.312460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.312499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.312513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.316837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.316872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.316886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.321147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.321186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.321209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.325449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.325487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.325501] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.329711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.329751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.329764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.333892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.400 [2024-07-12 12:36:47.333928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.400 [2024-07-12 12:36:47.333942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.400 [2024-07-12 12:36:47.338082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.338121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.338135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.342347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.342385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.342399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.346577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.346615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.346629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.351018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.351057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.351070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.355368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.355407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:18.401 [2024-07-12 12:36:47.355421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.359599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.359638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.359651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.364000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.364048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.364062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.368389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.368430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.368443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.372738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.372779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.372807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.377018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.377056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.377069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.381295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.381337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.381351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.385656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.385698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.385712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.389991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.390033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.390047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.394239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.394280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.394293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.398411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.398452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.398465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.403020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.403061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.403075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.407351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.407390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.407404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.411595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.411637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.411651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.416120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.416160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.416174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.420462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.420503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.420516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.424994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.425034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.425048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.429502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.429543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.429557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.433761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.433830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.433860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.438107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.438146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.438159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.442569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.442610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.442624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.446801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 
00:25:18.401 [2024-07-12 12:36:47.446838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.446852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.450953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.450992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.451005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.455467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.455507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.455521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.401 [2024-07-12 12:36:47.459807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.401 [2024-07-12 12:36:47.459853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.401 [2024-07-12 12:36:47.459867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.402 [2024-07-12 12:36:47.464275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.402 [2024-07-12 12:36:47.464321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.402 [2024-07-12 12:36:47.464338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.402 [2024-07-12 12:36:47.468860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.402 [2024-07-12 12:36:47.468919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.402 [2024-07-12 12:36:47.468933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.402 [2024-07-12 12:36:47.473501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.402 [2024-07-12 12:36:47.473543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.402 [2024-07-12 12:36:47.473556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.402 [2024-07-12 12:36:47.478082] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.402 [2024-07-12 12:36:47.478122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.402 [2024-07-12 12:36:47.478142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.660 [2024-07-12 12:36:47.482840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.660 [2024-07-12 12:36:47.482880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.660 [2024-07-12 12:36:47.482894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.660 [2024-07-12 12:36:47.487713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.487762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.487816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.492718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.492768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.492796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.499450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.499494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.499508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.504668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.504719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.504738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.512913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.513015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.513030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:25:18.661 [2024-07-12 12:36:47.518917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.518969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.518984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.525683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.525727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.525741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.532661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.533364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.533385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.539152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.539195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.539209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.546272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.546314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.546329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.553165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.553246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.553266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.559635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.559678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.559693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.566772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.566857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.566874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.573622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.573768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.573797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.580537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.580587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.580605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.587320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.587403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.587419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.594084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.594267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.594285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.600186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.600228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.600242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.606177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.606218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.606233] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.610628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.610670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.610684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.614923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.614963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.614977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.619266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.619330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.619350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.623709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.623749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.623763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.628297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.628337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.628352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.632645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.632686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.632700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.636966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.637006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:18.661 [2024-07-12 12:36:47.637020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.641520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.641560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.641574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.646071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.646108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.646121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.661 [2024-07-12 12:36:47.650381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.661 [2024-07-12 12:36:47.650420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.661 [2024-07-12 12:36:47.650435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.662 [2024-07-12 12:36:47.654919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.662 [2024-07-12 12:36:47.654959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.662 [2024-07-12 12:36:47.654973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.662 [2024-07-12 12:36:47.659229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.662 [2024-07-12 12:36:47.659269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.662 [2024-07-12 12:36:47.659292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.662 [2024-07-12 12:36:47.663529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.662 [2024-07-12 12:36:47.663568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.662 [2024-07-12 12:36:47.663581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.662 [2024-07-12 12:36:47.668110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.662 [2024-07-12 12:36:47.668150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.662 [2024-07-12 12:36:47.668164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.662 [2024-07-12 12:36:47.672351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.662 [2024-07-12 12:36:47.672390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.662 [2024-07-12 12:36:47.672404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.662 [2024-07-12 12:36:47.676615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.662 [2024-07-12 12:36:47.676655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.662 [2024-07-12 12:36:47.676669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.662 [2024-07-12 12:36:47.681117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.662 [2024-07-12 12:36:47.681171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.662 [2024-07-12 12:36:47.681201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.662 [2024-07-12 12:36:47.685294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.662 [2024-07-12 12:36:47.685334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.662 [2024-07-12 12:36:47.685347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.662 [2024-07-12 12:36:47.689604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.662 [2024-07-12 12:36:47.689660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.662 [2024-07-12 12:36:47.689674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.662 [2024-07-12 12:36:47.694045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.662 [2024-07-12 12:36:47.694085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.662 [2024-07-12 12:36:47.694098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.662 [2024-07-12 12:36:47.698369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.662 [2024-07-12 12:36:47.698414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.662 [2024-07-12 12:36:47.698431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.662 [2024-07-12 12:36:47.702712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.662 [2024-07-12 12:36:47.702752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.662 [2024-07-12 12:36:47.702765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.662 [2024-07-12 12:36:47.707099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.662 [2024-07-12 12:36:47.707139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.662 [2024-07-12 12:36:47.707153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.662 [2024-07-12 12:36:47.711375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.662 [2024-07-12 12:36:47.711422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.662 [2024-07-12 12:36:47.711439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.662 [2024-07-12 12:36:47.715738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.662 [2024-07-12 12:36:47.715778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.662 [2024-07-12 12:36:47.715806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.662 [2024-07-12 12:36:47.719992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.662 [2024-07-12 12:36:47.720031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.662 [2024-07-12 12:36:47.720044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.662 [2024-07-12 12:36:47.724415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.662 [2024-07-12 12:36:47.724461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.662 [2024-07-12 12:36:47.724479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.662 [2024-07-12 12:36:47.728751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 
00:25:18.662 [2024-07-12 12:36:47.728798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.662 [2024-07-12 12:36:47.728813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.662 [2024-07-12 12:36:47.733030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.662 [2024-07-12 12:36:47.733070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.662 [2024-07-12 12:36:47.733083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.662 [2024-07-12 12:36:47.737402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.662 [2024-07-12 12:36:47.737440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.662 [2024-07-12 12:36:47.737454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.922 [2024-07-12 12:36:47.741723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.922 [2024-07-12 12:36:47.741770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.922 [2024-07-12 12:36:47.741807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.922 [2024-07-12 12:36:47.746089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.922 [2024-07-12 12:36:47.746129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.922 [2024-07-12 12:36:47.746143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.922 [2024-07-12 12:36:47.750478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.922 [2024-07-12 12:36:47.750521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.922 [2024-07-12 12:36:47.750535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.922 [2024-07-12 12:36:47.754860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.922 [2024-07-12 12:36:47.754913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.922 [2024-07-12 12:36:47.754930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.922 [2024-07-12 12:36:47.759044] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.922 [2024-07-12 12:36:47.759081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.922 [2024-07-12 12:36:47.759095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.922 [2024-07-12 12:36:47.763385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.922 [2024-07-12 12:36:47.763424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.922 [2024-07-12 12:36:47.763438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.922 [2024-07-12 12:36:47.767706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.922 [2024-07-12 12:36:47.767746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.922 [2024-07-12 12:36:47.767760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.922 [2024-07-12 12:36:47.771945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.922 [2024-07-12 12:36:47.771981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.922 [2024-07-12 12:36:47.771995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.922 [2024-07-12 12:36:47.776453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.922 [2024-07-12 12:36:47.776494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.922 [2024-07-12 12:36:47.776508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.922 [2024-07-12 12:36:47.780892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.922 [2024-07-12 12:36:47.780947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.922 [2024-07-12 12:36:47.780963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.922 [2024-07-12 12:36:47.785303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.922 [2024-07-12 12:36:47.785341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.922 [2024-07-12 12:36:47.785355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:25:18.922 [2024-07-12 12:36:47.789593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.922 [2024-07-12 12:36:47.789639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.922 [2024-07-12 12:36:47.789653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.922 [2024-07-12 12:36:47.793944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.922 [2024-07-12 12:36:47.793984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.922 [2024-07-12 12:36:47.793997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.922 [2024-07-12 12:36:47.798388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.922 [2024-07-12 12:36:47.798443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.922 [2024-07-12 12:36:47.798458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.922 [2024-07-12 12:36:47.802841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.922 [2024-07-12 12:36:47.802888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.922 [2024-07-12 12:36:47.802906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.922 [2024-07-12 12:36:47.807055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.922 [2024-07-12 12:36:47.807094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.922 [2024-07-12 12:36:47.807108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.922 [2024-07-12 12:36:47.811359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.922 [2024-07-12 12:36:47.811399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.922 [2024-07-12 12:36:47.811413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.922 [2024-07-12 12:36:47.815614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.922 [2024-07-12 12:36:47.815652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.922 [2024-07-12 12:36:47.815666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.922 [2024-07-12 12:36:47.819863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.922 [2024-07-12 12:36:47.819932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.922 [2024-07-12 12:36:47.819947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.922 [2024-07-12 12:36:47.824356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.824395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.824409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.829035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.829076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.829090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.833410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.833464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.833494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.837822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.837860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.837874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.842268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.842308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.842322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.846624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.846664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.846677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.851088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.851128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.851142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.855479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.855518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.855532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.860048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.860132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.860162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.864335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.864375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.864388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.868857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.868899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.868914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.873172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.873211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.873224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.877473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.877523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:18.923 [2024-07-12 12:36:47.877538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.881889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.881928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.881942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.886460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.886502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.886516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.890954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.890993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.891007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.895357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.895398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.895412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.899900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.899939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.899953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.904383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.904427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.904441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.908770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.908830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.908845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.913217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.913257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.913270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.917613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.917654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.917669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.921948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.921987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.922008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.926215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.926255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.926269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.930548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.930588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.930602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.935088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.935154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.935184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.939809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.939848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.939862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.944252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.944290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.944304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.923 [2024-07-12 12:36:47.949109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.923 [2024-07-12 12:36:47.949161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.923 [2024-07-12 12:36:47.949207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.924 [2024-07-12 12:36:47.953553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.924 [2024-07-12 12:36:47.953593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.924 [2024-07-12 12:36:47.953606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.924 [2024-07-12 12:36:47.957953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.924 [2024-07-12 12:36:47.957991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.924 [2024-07-12 12:36:47.958005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.924 [2024-07-12 12:36:47.962439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.924 [2024-07-12 12:36:47.962510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.924 [2024-07-12 12:36:47.962524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.924 [2024-07-12 12:36:47.966873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.924 [2024-07-12 12:36:47.966913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.924 [2024-07-12 12:36:47.966927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.924 [2024-07-12 12:36:47.971223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 
00:25:18.924 [2024-07-12 12:36:47.971261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.924 [2024-07-12 12:36:47.971283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.924 [2024-07-12 12:36:47.975588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.924 [2024-07-12 12:36:47.975628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.924 [2024-07-12 12:36:47.975647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.924 [2024-07-12 12:36:47.979861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.924 [2024-07-12 12:36:47.979899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.924 [2024-07-12 12:36:47.979913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:18.924 [2024-07-12 12:36:47.984284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.924 [2024-07-12 12:36:47.984324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.924 [2024-07-12 12:36:47.984338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:18.924 [2024-07-12 12:36:47.988767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.924 [2024-07-12 12:36:47.988818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.924 [2024-07-12 12:36:47.988832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:18.924 [2024-07-12 12:36:47.993382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.924 [2024-07-12 12:36:47.993423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.924 [2024-07-12 12:36:47.993436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:18.924 [2024-07-12 12:36:47.997907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:18.924 [2024-07-12 12:36:47.997946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.924 [2024-07-12 12:36:47.997959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.002162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.002204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.002217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.006557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.006611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.006625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.011092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.011132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.011145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.015541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.015588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.015606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.020030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.020084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.020113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.024584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.024625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.024639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.029086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.029125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.029139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.033389] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.033429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.033443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.037832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.037870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.037884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.042298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.042339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.042353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.046725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.046778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.046808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.051118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.051202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.051232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.055702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.055746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.055760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.060316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.060357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.060371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.064601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.064641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.064655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.069110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.069178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.069207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.073689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.073733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.073746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.078231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.078300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.078330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.082938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.082981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.082995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.087491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.087531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.087545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.091895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.091947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.091975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.096394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.096440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.096456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.100997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.101042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.184 [2024-07-12 12:36:48.101056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.184 [2024-07-12 12:36:48.105477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.184 [2024-07-12 12:36:48.105534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.105548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.110229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.110304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.110334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.114793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.114866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.114889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.119291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.119339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.119353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.123968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.124024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.124055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.128510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.128562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.128576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.133004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.133059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.133074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.137452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.137497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.137511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.141971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.142030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.142061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.146464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.146516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.146538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.151214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.151293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.151317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.155662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.155719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:19.185 [2024-07-12 12:36:48.155749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.160147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.160190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.160205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.164670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.164716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.164730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.169304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.169347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.169360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.173746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.173803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.173819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.178267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.178317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.178332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.182864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.182931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.182945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.187211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.187284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.187303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.191766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.191825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.191850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.196112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.196156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.196170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.200611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.200660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.200677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.205025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.205065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.205080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.209435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.209477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.209491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.213905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.213945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.213960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.218390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.218438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.218456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.222698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.222736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.222750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.227162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.227208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.227222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.231642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.185 [2024-07-12 12:36:48.231685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.185 [2024-07-12 12:36:48.231699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.185 [2024-07-12 12:36:48.236083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.186 [2024-07-12 12:36:48.236125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.186 [2024-07-12 12:36:48.236139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.186 [2024-07-12 12:36:48.240577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.186 [2024-07-12 12:36:48.240617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.186 [2024-07-12 12:36:48.240631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.186 [2024-07-12 12:36:48.245052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.186 [2024-07-12 12:36:48.245092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.186 [2024-07-12 12:36:48.245106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.186 [2024-07-12 12:36:48.249362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 
00:25:19.186 [2024-07-12 12:36:48.249403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.186 [2024-07-12 12:36:48.249417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.186 [2024-07-12 12:36:48.253800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.186 [2024-07-12 12:36:48.253838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.186 [2024-07-12 12:36:48.253852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.186 [2024-07-12 12:36:48.258296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.186 [2024-07-12 12:36:48.258337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.186 [2024-07-12 12:36:48.258351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.186 [2024-07-12 12:36:48.262823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.186 [2024-07-12 12:36:48.262862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.186 [2024-07-12 12:36:48.262876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.446 [2024-07-12 12:36:48.267382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.446 [2024-07-12 12:36:48.267421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.446 [2024-07-12 12:36:48.267435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.446 [2024-07-12 12:36:48.271857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.446 [2024-07-12 12:36:48.271898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.446 [2024-07-12 12:36:48.271912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.446 [2024-07-12 12:36:48.276231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.446 [2024-07-12 12:36:48.276271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.446 [2024-07-12 12:36:48.276285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.446 [2024-07-12 12:36:48.280624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.446 [2024-07-12 12:36:48.280664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.446 [2024-07-12 12:36:48.280677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.446 [2024-07-12 12:36:48.285142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.446 [2024-07-12 12:36:48.285184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.446 [2024-07-12 12:36:48.285198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.446 [2024-07-12 12:36:48.289737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.446 [2024-07-12 12:36:48.289778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.446 [2024-07-12 12:36:48.289805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.446 [2024-07-12 12:36:48.294135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.446 [2024-07-12 12:36:48.294175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.446 [2024-07-12 12:36:48.294188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.446 [2024-07-12 12:36:48.298858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.446 [2024-07-12 12:36:48.298897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.446 [2024-07-12 12:36:48.298911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.446 [2024-07-12 12:36:48.303290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.446 [2024-07-12 12:36:48.303340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.446 [2024-07-12 12:36:48.303354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.446 [2024-07-12 12:36:48.307871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.446 [2024-07-12 12:36:48.307908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.446 [2024-07-12 12:36:48.307923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.446 [2024-07-12 12:36:48.312432] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.446 [2024-07-12 12:36:48.312473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.446 [2024-07-12 12:36:48.312495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.446 [2024-07-12 12:36:48.316972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.446 [2024-07-12 12:36:48.317011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.446 [2024-07-12 12:36:48.317025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.446 [2024-07-12 12:36:48.321333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.446 [2024-07-12 12:36:48.321373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.446 [2024-07-12 12:36:48.321386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.446 [2024-07-12 12:36:48.325832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.446 [2024-07-12 12:36:48.325879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.446 [2024-07-12 12:36:48.325893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.446 [2024-07-12 12:36:48.330436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.446 [2024-07-12 12:36:48.330481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.446 [2024-07-12 12:36:48.330495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.446 [2024-07-12 12:36:48.334871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.446 [2024-07-12 12:36:48.334911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.446 [2024-07-12 12:36:48.334925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.446 [2024-07-12 12:36:48.339459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.446 [2024-07-12 12:36:48.339499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.446 [2024-07-12 12:36:48.339513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:25:19.446 [2024-07-12 12:36:48.343948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.343988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.344001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.348390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.348430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.348444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.353115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.353155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.353170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.357594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.357634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.357648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.362108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.362162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.362180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.366588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.366627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.366641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.371015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.371068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.371113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.375562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.375605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.375619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.380191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.380233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.380247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.384464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.384507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.384520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.389025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.389067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.389081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.393408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.393448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.393462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.397796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.397858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.397877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.402301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.402340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.402354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.406802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.406871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.406885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.411335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.411376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.411391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.415891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.415931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.415944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.420297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.420367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.420381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.424797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.424854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.424869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.429406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.429462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.429477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.433954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.433991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:19.447 [2024-07-12 12:36:48.434004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.438236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.438276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.438290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.447 [2024-07-12 12:36:48.442713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.447 [2024-07-12 12:36:48.442755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.447 [2024-07-12 12:36:48.442768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.448 [2024-07-12 12:36:48.447050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.448 [2024-07-12 12:36:48.447137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.448 [2024-07-12 12:36:48.447166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.448 [2024-07-12 12:36:48.451567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.448 [2024-07-12 12:36:48.451607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.448 [2024-07-12 12:36:48.451621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.448 [2024-07-12 12:36:48.456053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.448 [2024-07-12 12:36:48.456100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.448 [2024-07-12 12:36:48.456114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.448 [2024-07-12 12:36:48.460510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.448 [2024-07-12 12:36:48.460549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.448 [2024-07-12 12:36:48.460563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.448 [2024-07-12 12:36:48.464980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.448 [2024-07-12 12:36:48.465018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.448 [2024-07-12 12:36:48.465032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.448 [2024-07-12 12:36:48.469468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.448 [2024-07-12 12:36:48.469522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.448 [2024-07-12 12:36:48.469544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.448 [2024-07-12 12:36:48.474136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.448 [2024-07-12 12:36:48.474178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.448 [2024-07-12 12:36:48.474192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.448 [2024-07-12 12:36:48.478444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.448 [2024-07-12 12:36:48.478485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.448 [2024-07-12 12:36:48.478499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.448 [2024-07-12 12:36:48.483079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.448 [2024-07-12 12:36:48.483121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.448 [2024-07-12 12:36:48.483135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.448 [2024-07-12 12:36:48.487395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.448 [2024-07-12 12:36:48.487434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.448 [2024-07-12 12:36:48.487449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.448 [2024-07-12 12:36:48.491910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.448 [2024-07-12 12:36:48.491953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.448 [2024-07-12 12:36:48.491967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.448 [2024-07-12 12:36:48.496460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.448 [2024-07-12 12:36:48.496503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.448 [2024-07-12 12:36:48.496517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.448 [2024-07-12 12:36:48.500935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.448 [2024-07-12 12:36:48.500973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.448 [2024-07-12 12:36:48.500988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.448 [2024-07-12 12:36:48.505330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.448 [2024-07-12 12:36:48.505371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.448 [2024-07-12 12:36:48.505384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.448 [2024-07-12 12:36:48.510058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.448 [2024-07-12 12:36:48.510099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.448 [2024-07-12 12:36:48.510114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.448 [2024-07-12 12:36:48.514534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.448 [2024-07-12 12:36:48.514574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.448 [2024-07-12 12:36:48.514587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.448 [2024-07-12 12:36:48.518910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.448 [2024-07-12 12:36:48.518956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.448 [2024-07-12 12:36:48.518974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.448 [2024-07-12 12:36:48.523371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.448 [2024-07-12 12:36:48.523411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.448 [2024-07-12 12:36:48.523425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.708 [2024-07-12 12:36:48.527885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 
00:25:19.708 [2024-07-12 12:36:48.527925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.708 [2024-07-12 12:36:48.527939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.708 [2024-07-12 12:36:48.532391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.708 [2024-07-12 12:36:48.532432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.708 [2024-07-12 12:36:48.532446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.708 [2024-07-12 12:36:48.536884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.708 [2024-07-12 12:36:48.536932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.708 [2024-07-12 12:36:48.536949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.708 [2024-07-12 12:36:48.541203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.708 [2024-07-12 12:36:48.541242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.708 [2024-07-12 12:36:48.541255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.708 [2024-07-12 12:36:48.545756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.708 [2024-07-12 12:36:48.545811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.708 [2024-07-12 12:36:48.545826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.708 [2024-07-12 12:36:48.550399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.708 [2024-07-12 12:36:48.550442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.708 [2024-07-12 12:36:48.550456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.708 [2024-07-12 12:36:48.554874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.708 [2024-07-12 12:36:48.554914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.708 [2024-07-12 12:36:48.554928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.708 [2024-07-12 12:36:48.559322] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.708 [2024-07-12 12:36:48.559363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.708 [2024-07-12 12:36:48.559378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.708 [2024-07-12 12:36:48.563799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.708 [2024-07-12 12:36:48.563837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.708 [2024-07-12 12:36:48.563851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.708 [2024-07-12 12:36:48.568233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.708 [2024-07-12 12:36:48.568289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.708 [2024-07-12 12:36:48.568303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.708 [2024-07-12 12:36:48.572760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.708 [2024-07-12 12:36:48.572813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.708 [2024-07-12 12:36:48.572827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.708 [2024-07-12 12:36:48.577085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.577124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.577138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.581566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.581608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.581623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.586175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.586215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.586228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:25:19.709 [2024-07-12 12:36:48.590590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.590631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.590644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.595183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.595223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.595236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.599589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.599630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.599644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.603956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.603994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.604008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.608457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.608499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.608513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.612967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.613026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.613041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.617740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.617781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.617808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.622193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.622235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.622249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.626788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.626839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.626853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.631165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.631205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.631218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.635656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.635697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.635711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.640214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.640255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.640268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.644602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.644656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.644675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.649078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.649118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.649132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.653490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.653530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.653544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.657993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.658032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.658057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.662769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.662821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.662836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.667290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.667334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.667348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.671779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.671829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.671843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.676421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.676475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.676490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.680887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.680928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:19.709 [2024-07-12 12:36:48.680942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.685408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.685457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.685474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.689843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.689882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.689896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.694208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.694255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.694268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.698748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.698810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.709 [2024-07-12 12:36:48.698830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.709 [2024-07-12 12:36:48.703296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.709 [2024-07-12 12:36:48.703335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.710 [2024-07-12 12:36:48.703349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.710 [2024-07-12 12:36:48.707929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.710 [2024-07-12 12:36:48.707965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.710 [2024-07-12 12:36:48.707995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:19.710 [2024-07-12 12:36:48.712628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.710 [2024-07-12 12:36:48.712683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.710 [2024-07-12 12:36:48.712712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:19.710 [2024-07-12 12:36:48.717331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.710 [2024-07-12 12:36:48.717391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.710 [2024-07-12 12:36:48.717406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:19.710 [2024-07-12 12:36:48.721757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x998a10) 00:25:19.710 [2024-07-12 12:36:48.721809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.710 [2024-07-12 12:36:48.721825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:19.710 00:25:19.710 Latency(us) 00:25:19.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.710 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:19.710 nvme0n1 : 2.00 6872.49 859.06 0.00 0.00 2324.53 1980.97 8043.05 00:25:19.710 =================================================================================================================== 00:25:19.710 Total : 6872.49 859.06 0.00 0.00 2324.53 1980.97 8043.05 00:25:19.710 0 00:25:19.710 12:36:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:19.710 12:36:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:19.710 12:36:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:19.710 | .driver_specific 00:25:19.710 | .nvme_error 00:25:19.710 | .status_code 00:25:19.710 | .command_transient_transport_error' 00:25:19.710 12:36:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:19.968 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 443 > 0 )) 00:25:19.968 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95397 00:25:19.968 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 95397 ']' 00:25:19.968 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 95397 00:25:19.968 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:20.226 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:20.226 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95397 00:25:20.226 killing process with pid 95397 00:25:20.226 Received shutdown signal, test time was about 2.000000 seconds 00:25:20.226 00:25:20.226 Latency(us) 00:25:20.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.226 
=================================================================================================================== 00:25:20.226 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:20.226 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:20.226 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:20.226 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95397' 00:25:20.226 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 95397 00:25:20.226 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 95397 00:25:20.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:20.226 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:20.226 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:20.226 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:20.226 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:20.226 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:20.226 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95459 00:25:20.226 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95459 /var/tmp/bperf.sock 00:25:20.226 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:20.226 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 95459 ']' 00:25:20.226 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:20.226 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:20.226 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:20.226 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:20.226 12:36:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:20.484 [2024-07-12 12:36:49.345753] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
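The xtrace above is run_bperf_err starting a fresh bdevperf instance for the randwrite 4096/128 case and then waiting for its RPC socket before any configuration is pushed to it. A minimal stand-alone sketch of that launch-and-wait step follows; the bdevperf command line and max_retries=100 are taken from the trace itself, while the polling loop and sleep interval only approximate the repo's waitforlisten helper and are assumptions, not its exact code:

    # Launch bdevperf idle (-z) on its private RPC socket, then wait until the
    # socket answers RPC calls before configuring the test through it.
    BPERF_SOCK=/var/tmp/bperf.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    for ((i = 0; i < 100; i++)); do          # max_retries=100, as in the trace
        # rpc_get_methods succeeds once the app is listening; the 0.1s interval is a guess
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$BPERF_SOCK" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done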
00:25:20.484 [2024-07-12 12:36:49.346119] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95459 ] 00:25:20.484 [2024-07-12 12:36:49.484558] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.743 [2024-07-12 12:36:49.588050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.743 [2024-07-12 12:36:49.648378] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:21.308 12:36:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:21.308 12:36:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:21.308 12:36:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:21.308 12:36:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:21.874 12:36:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:21.874 12:36:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.874 12:36:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:21.874 12:36:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.874 12:36:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:21.874 12:36:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:22.132 nvme0n1 00:25:22.132 12:36:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:22.132 12:36:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.132 12:36:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:22.132 12:36:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.132 12:36:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:22.132 12:36:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:22.132 Running I/O for 2 seconds... 
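With bdevperf listening, the digest.sh steps echoed above configure the write-path digest-error run and, after the 2-second workload, verify that the injected digest failures were accounted as transient transport errors (the same check that returned 443 for the randread leg earlier). A condensed sketch of that flow, using only the RPCs visible in the trace and assuming the target-side accel_error_inject_error calls go to the default target RPC socket; the authoritative logic is the digest.sh lines shown in the xtrace, not this sketch:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bperf.sock
    # Keep per-status-code NVMe error counters and retry indefinitely so digest
    # failures surface as transient transport errors instead of failed I/O.
    $RPC -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Target side: keep crc32c error injection off while the controller connects.
    $RPC accel_error_inject_error -o crc32c -t disable
    # Attach the TCP controller with data digest enabled (--ddgst).
    $RPC -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Now corrupt every 256th crc32c operation so data digests miscompare.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256
    # Drive the workload, then tally the transient transport errors.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests
    errcount=$($RPC -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))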
00:25:22.132 [2024-07-12 12:36:51.158304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190fef90 00:25:22.132 [2024-07-12 12:36:51.160935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.132 [2024-07-12 12:36:51.160982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.132 [2024-07-12 12:36:51.174828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190feb58 00:25:22.132 [2024-07-12 12:36:51.177496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.132 [2024-07-12 12:36:51.177539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:22.132 [2024-07-12 12:36:51.191624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190fe2e8 00:25:22.132 [2024-07-12 12:36:51.194222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.132 [2024-07-12 12:36:51.194271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:22.132 [2024-07-12 12:36:51.208154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190fda78 00:25:22.132 [2024-07-12 12:36:51.210672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.132 [2024-07-12 12:36:51.210721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:22.390 [2024-07-12 12:36:51.224507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190fd208 00:25:22.390 [2024-07-12 12:36:51.227146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.390 [2024-07-12 12:36:51.227190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:22.390 [2024-07-12 12:36:51.241098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190fc998 00:25:22.390 [2024-07-12 12:36:51.243564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.390 [2024-07-12 12:36:51.243604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:22.390 [2024-07-12 12:36:51.257463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190fc128 00:25:22.390 [2024-07-12 12:36:51.259916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.390 [2024-07-12 12:36:51.259956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:25:22.390 [2024-07-12 12:36:51.273615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190fb8b8 00:25:22.390 [2024-07-12 12:36:51.276080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.390 [2024-07-12 12:36:51.276130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:22.390 [2024-07-12 12:36:51.289969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190fb048 00:25:22.390 [2024-07-12 12:36:51.292403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.390 [2024-07-12 12:36:51.292443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:22.390 [2024-07-12 12:36:51.306297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190fa7d8 00:25:22.390 [2024-07-12 12:36:51.308666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.390 [2024-07-12 12:36:51.308705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:22.390 [2024-07-12 12:36:51.322481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f9f68 00:25:22.390 [2024-07-12 12:36:51.324919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.390 [2024-07-12 12:36:51.324959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:22.390 [2024-07-12 12:36:51.338909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f96f8 00:25:22.390 [2024-07-12 12:36:51.341276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.390 [2024-07-12 12:36:51.341313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:22.390 [2024-07-12 12:36:51.355179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f8e88 00:25:22.390 [2024-07-12 12:36:51.357507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.390 [2024-07-12 12:36:51.357547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:22.390 [2024-07-12 12:36:51.371548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f8618 00:25:22.390 [2024-07-12 12:36:51.373908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.390 [2024-07-12 12:36:51.373951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:25:22.390 [2024-07-12 12:36:51.388732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f7da8 00:25:22.390 [2024-07-12 12:36:51.391139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.390 [2024-07-12 12:36:51.391189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:22.390 [2024-07-12 12:36:51.406032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f7538 00:25:22.390 [2024-07-12 12:36:51.408480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.390 [2024-07-12 12:36:51.408531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:22.390 [2024-07-12 12:36:51.423201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f6cc8 00:25:22.390 [2024-07-12 12:36:51.425581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.390 [2024-07-12 12:36:51.425630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.390 [2024-07-12 12:36:51.440286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f6458 00:25:22.390 [2024-07-12 12:36:51.442588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.390 [2024-07-12 12:36:51.442630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:22.390 [2024-07-12 12:36:51.457366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f5be8 00:25:22.390 [2024-07-12 12:36:51.459624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.390 [2024-07-12 12:36:51.459668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:22.648 [2024-07-12 12:36:51.473610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f5378 00:25:22.648 [2024-07-12 12:36:51.475916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.648 [2024-07-12 12:36:51.475957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:22.648 [2024-07-12 12:36:51.489901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f4b08 00:25:22.648 [2024-07-12 12:36:51.492052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.648 [2024-07-12 12:36:51.492091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:22.648 [2024-07-12 12:36:51.506082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f4298 00:25:22.648 [2024-07-12 12:36:51.508233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.648 [2024-07-12 12:36:51.508276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:22.648 [2024-07-12 12:36:51.522264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f3a28 00:25:22.648 [2024-07-12 12:36:51.524502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.648 [2024-07-12 12:36:51.524543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:22.648 [2024-07-12 12:36:51.538856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f31b8 00:25:22.648 [2024-07-12 12:36:51.540986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.648 [2024-07-12 12:36:51.541026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:22.648 [2024-07-12 12:36:51.555256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f2948 00:25:22.648 [2024-07-12 12:36:51.557355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.648 [2024-07-12 12:36:51.557394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:22.648 [2024-07-12 12:36:51.571769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f20d8 00:25:22.648 [2024-07-12 12:36:51.573827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.648 [2024-07-12 12:36:51.573865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:22.648 [2024-07-12 12:36:51.588093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f1868 00:25:22.648 [2024-07-12 12:36:51.590240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.648 [2024-07-12 12:36:51.590279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:22.648 [2024-07-12 12:36:51.604373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f0ff8 00:25:22.649 [2024-07-12 12:36:51.606381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.649 [2024-07-12 12:36:51.606421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:22.649 [2024-07-12 12:36:51.620443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f0788 00:25:22.649 [2024-07-12 12:36:51.622444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.649 [2024-07-12 12:36:51.622484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:22.649 [2024-07-12 12:36:51.636891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190eff18 00:25:22.649 [2024-07-12 12:36:51.638844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.649 [2024-07-12 12:36:51.638887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:22.649 [2024-07-12 12:36:51.653102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190ef6a8 00:25:22.649 [2024-07-12 12:36:51.655038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.649 [2024-07-12 12:36:51.655076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:22.649 [2024-07-12 12:36:51.669306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190eee38 00:25:22.649 [2024-07-12 12:36:51.671347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.649 [2024-07-12 12:36:51.671387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:22.649 [2024-07-12 12:36:51.685590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190ee5c8 00:25:22.649 [2024-07-12 12:36:51.687523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.649 [2024-07-12 12:36:51.687560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.649 [2024-07-12 12:36:51.701831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190edd58 00:25:22.649 [2024-07-12 12:36:51.703704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.649 [2024-07-12 12:36:51.703744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:22.649 [2024-07-12 12:36:51.718030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190ed4e8 00:25:22.649 [2024-07-12 12:36:51.719954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.649 [2024-07-12 12:36:51.719991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:22.908 [2024-07-12 12:36:51.734482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190ecc78 00:25:22.908 [2024-07-12 12:36:51.736355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.908 [2024-07-12 12:36:51.736392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:22.908 [2024-07-12 12:36:51.750891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190ec408 00:25:22.908 [2024-07-12 12:36:51.752724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.908 [2024-07-12 12:36:51.752762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:22.908 [2024-07-12 12:36:51.767114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190ebb98 00:25:22.908 [2024-07-12 12:36:51.768936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.908 [2024-07-12 12:36:51.768975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:22.908 [2024-07-12 12:36:51.783151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190eb328 00:25:22.908 [2024-07-12 12:36:51.785010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.908 [2024-07-12 12:36:51.785049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:22.908 [2024-07-12 12:36:51.799627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190eaab8 00:25:22.908 [2024-07-12 12:36:51.801458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.908 [2024-07-12 12:36:51.801492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:22.908 [2024-07-12 12:36:51.815794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190ea248 00:25:22.908 [2024-07-12 12:36:51.817591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.908 [2024-07-12 12:36:51.817628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:22.908 [2024-07-12 12:36:51.831944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e99d8 00:25:22.908 [2024-07-12 12:36:51.833717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.908 [2024-07-12 12:36:51.833755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:22.908 [2024-07-12 12:36:51.848458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e9168 00:25:22.908 [2024-07-12 12:36:51.850211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.908 [2024-07-12 12:36:51.850248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:22.908 [2024-07-12 12:36:51.865310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e88f8 00:25:22.908 [2024-07-12 12:36:51.867081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.908 [2024-07-12 12:36:51.867117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:22.908 [2024-07-12 12:36:51.882026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e8088 00:25:22.908 [2024-07-12 12:36:51.883702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.908 [2024-07-12 12:36:51.883813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:22.908 [2024-07-12 12:36:51.898582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e7818 00:25:22.908 [2024-07-12 12:36:51.900381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.908 [2024-07-12 12:36:51.900417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:22.908 [2024-07-12 12:36:51.915098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e6fa8 00:25:22.908 [2024-07-12 12:36:51.916770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.908 [2024-07-12 12:36:51.916817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:22.908 [2024-07-12 12:36:51.931407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e6738 00:25:22.908 [2024-07-12 12:36:51.933112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.908 [2024-07-12 12:36:51.933148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:22.908 [2024-07-12 12:36:51.947701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e5ec8 00:25:22.908 [2024-07-12 12:36:51.949343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.908 [2024-07-12 12:36:51.949379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.908 [2024-07-12 12:36:51.964103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e5658 00:25:22.908 [2024-07-12 12:36:51.965655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.908 [2024-07-12 12:36:51.965692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:22.908 [2024-07-12 12:36:51.980302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e4de8 00:25:22.908 [2024-07-12 12:36:51.981858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.908 [2024-07-12 12:36:51.981893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:23.166 [2024-07-12 12:36:51.996539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e4578 00:25:23.166 [2024-07-12 12:36:51.998108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.166 [2024-07-12 12:36:51.998144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:23.166 [2024-07-12 12:36:52.012857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e3d08 00:25:23.166 [2024-07-12 12:36:52.014361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.166 [2024-07-12 12:36:52.014399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:23.166 [2024-07-12 12:36:52.029173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e3498 00:25:23.167 [2024-07-12 12:36:52.030697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.167 [2024-07-12 12:36:52.030734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:23.167 [2024-07-12 12:36:52.045570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e2c28 00:25:23.167 [2024-07-12 12:36:52.047085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.167 [2024-07-12 12:36:52.047120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:23.167 [2024-07-12 12:36:52.062035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e23b8 00:25:23.167 [2024-07-12 12:36:52.063560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.167 [2024-07-12 
12:36:52.063597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:23.167 [2024-07-12 12:36:52.078380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e1b48 00:25:23.167 [2024-07-12 12:36:52.079838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.167 [2024-07-12 12:36:52.079873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:23.167 [2024-07-12 12:36:52.094630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e12d8 00:25:23.167 [2024-07-12 12:36:52.096064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.167 [2024-07-12 12:36:52.096104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:23.167 [2024-07-12 12:36:52.111373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e0a68 00:25:23.167 [2024-07-12 12:36:52.112763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.167 [2024-07-12 12:36:52.112818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:23.167 [2024-07-12 12:36:52.127574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e01f8 00:25:23.167 [2024-07-12 12:36:52.129081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.167 [2024-07-12 12:36:52.129119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:23.167 [2024-07-12 12:36:52.144129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190df988 00:25:23.167 [2024-07-12 12:36:52.145526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.167 [2024-07-12 12:36:52.145566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:23.167 [2024-07-12 12:36:52.160821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190df118 00:25:23.167 [2024-07-12 12:36:52.162273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.167 [2024-07-12 12:36:52.162321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:23.167 [2024-07-12 12:36:52.177541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190de8a8 00:25:23.167 [2024-07-12 12:36:52.178928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25454 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:23.167 [2024-07-12 12:36:52.178966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:23.167 [2024-07-12 12:36:52.193981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190de038 00:25:23.167 [2024-07-12 12:36:52.195311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.167 [2024-07-12 12:36:52.195352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:23.167 [2024-07-12 12:36:52.217159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190de038 00:25:23.167 [2024-07-12 12:36:52.219673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.167 [2024-07-12 12:36:52.219713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.167 [2024-07-12 12:36:52.233347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190de8a8 00:25:23.167 [2024-07-12 12:36:52.235986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.167 [2024-07-12 12:36:52.236026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:23.430 [2024-07-12 12:36:52.249795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190df118 00:25:23.430 [2024-07-12 12:36:52.252300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.430 [2024-07-12 12:36:52.252340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.430 [2024-07-12 12:36:52.266409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190df988 00:25:23.430 [2024-07-12 12:36:52.269001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.430 [2024-07-12 12:36:52.269040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:23.430 [2024-07-12 12:36:52.282671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e01f8 00:25:23.430 [2024-07-12 12:36:52.285223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.430 [2024-07-12 12:36:52.285259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:23.430 [2024-07-12 12:36:52.299416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e0a68 00:25:23.430 [2024-07-12 12:36:52.301869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1228 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.430 [2024-07-12 12:36:52.301905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:23.430 [2024-07-12 12:36:52.315933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e12d8 00:25:23.430 [2024-07-12 12:36:52.318456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.430 [2024-07-12 12:36:52.318520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:23.430 [2024-07-12 12:36:52.332427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e1b48 00:25:23.430 [2024-07-12 12:36:52.334809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.430 [2024-07-12 12:36:52.334847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:23.430 [2024-07-12 12:36:52.348695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e23b8 00:25:23.430 [2024-07-12 12:36:52.351164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.430 [2024-07-12 12:36:52.351204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:23.430 [2024-07-12 12:36:52.365238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e2c28 00:25:23.430 [2024-07-12 12:36:52.367568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.430 [2024-07-12 12:36:52.367608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:23.430 [2024-07-12 12:36:52.381662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e3498 00:25:23.430 [2024-07-12 12:36:52.384024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.430 [2024-07-12 12:36:52.384064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:23.430 [2024-07-12 12:36:52.397964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e3d08 00:25:23.430 [2024-07-12 12:36:52.400465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.430 [2024-07-12 12:36:52.400504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:23.430 [2024-07-12 12:36:52.414306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e4578 00:25:23.430 [2024-07-12 12:36:52.416686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 
lba:400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.430 [2024-07-12 12:36:52.416738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:23.430 [2024-07-12 12:36:52.431217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e4de8 00:25:23.430 [2024-07-12 12:36:52.433552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.430 [2024-07-12 12:36:52.433591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:23.430 [2024-07-12 12:36:52.447976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e5658 00:25:23.430 [2024-07-12 12:36:52.450246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.430 [2024-07-12 12:36:52.450286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:23.430 [2024-07-12 12:36:52.464617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e5ec8 00:25:23.430 [2024-07-12 12:36:52.466888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.430 [2024-07-12 12:36:52.466927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.430 [2024-07-12 12:36:52.481109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e6738 00:25:23.430 [2024-07-12 12:36:52.483365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.430 [2024-07-12 12:36:52.483406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:23.430 [2024-07-12 12:36:52.497076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e6fa8 00:25:23.430 [2024-07-12 12:36:52.499240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.430 [2024-07-12 12:36:52.499284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:23.687 [2024-07-12 12:36:52.512954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e7818 00:25:23.687 [2024-07-12 12:36:52.515112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.687 [2024-07-12 12:36:52.515148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:23.687 [2024-07-12 12:36:52.528808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e8088 00:25:23.687 [2024-07-12 12:36:52.530941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:48 nsid:1 lba:16214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.687 [2024-07-12 12:36:52.530978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:23.687 [2024-07-12 12:36:52.544858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e88f8 00:25:23.687 [2024-07-12 12:36:52.546979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.687 [2024-07-12 12:36:52.547016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:23.687 [2024-07-12 12:36:52.560858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e9168 00:25:23.687 [2024-07-12 12:36:52.562977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.687 [2024-07-12 12:36:52.563016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:23.687 [2024-07-12 12:36:52.577138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190e99d8 00:25:23.687 [2024-07-12 12:36:52.579239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.687 [2024-07-12 12:36:52.579306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:23.687 [2024-07-12 12:36:52.593387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190ea248 00:25:23.687 [2024-07-12 12:36:52.595467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.687 [2024-07-12 12:36:52.595507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:23.687 [2024-07-12 12:36:52.609550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190eaab8 00:25:23.687 [2024-07-12 12:36:52.611621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.687 [2024-07-12 12:36:52.611667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:23.687 [2024-07-12 12:36:52.625642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190eb328 00:25:23.687 [2024-07-12 12:36:52.627700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.687 [2024-07-12 12:36:52.627738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:23.687 [2024-07-12 12:36:52.641694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190ebb98 00:25:23.687 [2024-07-12 12:36:52.643721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:20 nsid:1 lba:9063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.687 [2024-07-12 12:36:52.643759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:23.687 [2024-07-12 12:36:52.657693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190ec408 00:25:23.687 [2024-07-12 12:36:52.659701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.687 [2024-07-12 12:36:52.659741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:23.687 [2024-07-12 12:36:52.673877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190ecc78 00:25:23.687 [2024-07-12 12:36:52.675877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.687 [2024-07-12 12:36:52.675916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:23.687 [2024-07-12 12:36:52.689921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190ed4e8 00:25:23.687 [2024-07-12 12:36:52.691917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.688 [2024-07-12 12:36:52.691958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:23.688 [2024-07-12 12:36:52.705994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190edd58 00:25:23.688 [2024-07-12 12:36:52.707933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.688 [2024-07-12 12:36:52.707975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:23.688 [2024-07-12 12:36:52.721870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190ee5c8 00:25:23.688 [2024-07-12 12:36:52.723798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.688 [2024-07-12 12:36:52.723834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.688 [2024-07-12 12:36:52.737752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190eee38 00:25:23.688 [2024-07-12 12:36:52.739669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.688 [2024-07-12 12:36:52.739705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:23.688 [2024-07-12 12:36:52.753686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190ef6a8 00:25:23.688 [2024-07-12 12:36:52.755562] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.688 [2024-07-12 12:36:52.755597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:23.946 [2024-07-12 12:36:52.769695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190eff18 00:25:23.946 [2024-07-12 12:36:52.771564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.946 [2024-07-12 12:36:52.771601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:23.946 [2024-07-12 12:36:52.785691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f0788 00:25:23.946 [2024-07-12 12:36:52.787549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.946 [2024-07-12 12:36:52.787587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:23.946 [2024-07-12 12:36:52.801599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f0ff8 00:25:23.946 [2024-07-12 12:36:52.803421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.946 [2024-07-12 12:36:52.803459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:23.946 [2024-07-12 12:36:52.817481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f1868 00:25:23.946 [2024-07-12 12:36:52.819268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.946 [2024-07-12 12:36:52.819312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:23.946 [2024-07-12 12:36:52.833424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f20d8 00:25:23.946 [2024-07-12 12:36:52.835184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.946 [2024-07-12 12:36:52.835220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:23.946 [2024-07-12 12:36:52.849277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f2948 00:25:23.946 [2024-07-12 12:36:52.851017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.946 [2024-07-12 12:36:52.851053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:23.946 [2024-07-12 12:36:52.870691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f31b8 00:25:23.946 [2024-07-12 12:36:52.872911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.946 [2024-07-12 12:36:52.872961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:23.946 [2024-07-12 12:36:52.889811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f3a28 00:25:23.946 [2024-07-12 12:36:52.892076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.946 [2024-07-12 12:36:52.892118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:23.946 [2024-07-12 12:36:52.909049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f4298 00:25:23.946 [2024-07-12 12:36:52.911239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.946 [2024-07-12 12:36:52.911288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:23.946 [2024-07-12 12:36:52.928349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f4b08 00:25:23.946 [2024-07-12 12:36:52.930520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.946 [2024-07-12 12:36:52.930561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:23.946 [2024-07-12 12:36:52.947671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f5378 00:25:23.946 [2024-07-12 12:36:52.949841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.946 [2024-07-12 12:36:52.949883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:23.946 [2024-07-12 12:36:52.967036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f5be8 00:25:23.946 [2024-07-12 12:36:52.969139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.946 [2024-07-12 12:36:52.969181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:23.946 [2024-07-12 12:36:52.985807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f6458 00:25:23.946 [2024-07-12 12:36:52.987879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.946 [2024-07-12 12:36:52.987919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:23.946 [2024-07-12 12:36:53.004526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f6cc8 00:25:23.946 [2024-07-12 12:36:53.006544] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.946 [2024-07-12 12:36:53.006582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.946 [2024-07-12 12:36:53.023175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f7538 00:25:23.946 [2024-07-12 12:36:53.025166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.946 [2024-07-12 12:36:53.025203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:24.205 [2024-07-12 12:36:53.041749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f7da8 00:25:24.205 [2024-07-12 12:36:53.043739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.205 [2024-07-12 12:36:53.043776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:24.205 [2024-07-12 12:36:53.060421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f8618 00:25:24.205 [2024-07-12 12:36:53.062373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.205 [2024-07-12 12:36:53.062409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:24.205 [2024-07-12 12:36:53.079104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f8e88 00:25:24.205 [2024-07-12 12:36:53.081031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.205 [2024-07-12 12:36:53.081067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:24.205 [2024-07-12 12:36:53.097805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f96f8 00:25:24.205 [2024-07-12 12:36:53.099698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.205 [2024-07-12 12:36:53.099740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:24.205 [2024-07-12 12:36:53.116421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190f9f68 00:25:24.205 [2024-07-12 12:36:53.118283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.205 [2024-07-12 12:36:53.118320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:24.205 [2024-07-12 12:36:53.135026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb62240) with pdu=0x2000190fa7d8 00:25:24.205 [2024-07-12 
12:36:53.136886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:24.205 [2024-07-12 12:36:53.136922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:24.205 00:25:24.205 Latency(us) 00:25:24.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.205 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:24.205 nvme0n1 : 2.00 15167.58 59.25 0.00 0.00 8430.41 2427.81 31933.91 00:25:24.205 =================================================================================================================== 00:25:24.205 Total : 15167.58 59.25 0.00 0.00 8430.41 2427.81 31933.91 00:25:24.205 0 00:25:24.205 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:24.205 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:24.205 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:24.205 | .driver_specific 00:25:24.205 | .nvme_error 00:25:24.205 | .status_code 00:25:24.205 | .command_transient_transport_error' 00:25:24.205 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:24.463 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 119 > 0 )) 00:25:24.463 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95459 00:25:24.463 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 95459 ']' 00:25:24.463 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 95459 00:25:24.463 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:24.463 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:24.463 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95459 00:25:24.463 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:24.463 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:24.463 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95459' 00:25:24.463 killing process with pid 95459 00:25:24.463 Received shutdown signal, test time was about 2.000000 seconds 00:25:24.463 00:25:24.463 Latency(us) 00:25:24.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.463 =================================================================================================================== 00:25:24.463 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:24.463 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 95459 00:25:24.463 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 95459 00:25:24.720 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:24.720 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 
00:25:24.720 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:24.720 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:24.720 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:24.720 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95514 00:25:24.720 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:24.720 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95514 /var/tmp/bperf.sock 00:25:24.721 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 95514 ']' 00:25:24.721 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:24.721 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:24.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:24.721 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:24.721 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:24.721 12:36:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:24.721 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:24.721 Zero copy mechanism will not be used. 00:25:24.721 [2024-07-12 12:36:53.735869] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:25:24.721 [2024-07-12 12:36:53.735967] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95514 ] 00:25:24.978 [2024-07-12 12:36:53.873955] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.978 [2024-07-12 12:36:53.969711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.978 [2024-07-12 12:36:54.024699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:25.236 12:36:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:25.236 12:36:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:25.236 12:36:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:25.236 12:36:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:25.236 12:36:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:25.236 12:36:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.236 12:36:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:25.236 12:36:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.236 12:36:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:25.236 12:36:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:25.801 nvme0n1 00:25:25.801 12:36:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:25.801 12:36:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.801 12:36:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:25.801 12:36:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.801 12:36:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:25.801 12:36:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:25.801 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:25.801 Zero copy mechanism will not be used. 00:25:25.801 Running I/O for 2 seconds... 
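The trace above captures the whole digest-error flow for this pass: bdevperf is started in wait-for-RPC mode on the bperf socket, per-command NVMe error statistics are enabled, a controller is attached with TCP data digest checking (--ddgst), and the accel layer is told to corrupt every 32nd CRC-32C calculation, so each affected WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); the count is then read back from bdev_get_iostat. A condensed sketch of that same sequence follows, assuming it is run from the SPDK repo root against the target at 10.0.0.2:4420 set up earlier in the log (the RPC shell variable is only shorthand introduced here, not part of the test script):

  # Start bdevperf on the bperf RPC socket with the same flags as the log (-z waits for RPC).
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  RPC="scripts/rpc.py -s /var/tmp/bperf.sock"

  # Retry failed commands indefinitely and keep per-command NVMe status counters.
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the target with data digest enabled, then corrupt every 32nd CRC-32C calculation.
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32

  # Run the workload, then count completions that ended in a transient transport error,
  # mirroring get_transient_errcount in host/digest.sh.
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  $RPC bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The error stream that follows is the visible effect of that injection: one data digest error and one TRANSIENT TRANSPORT ERROR completion per corrupted WRITE.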
00:25:25.801 [2024-07-12 12:36:54.754974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.801 [2024-07-12 12:36:54.755323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.801 [2024-07-12 12:36:54.755356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.801 [2024-07-12 12:36:54.760312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.801 [2024-07-12 12:36:54.760616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.801 [2024-07-12 12:36:54.760647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.801 [2024-07-12 12:36:54.765596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.801 [2024-07-12 12:36:54.765935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.801 [2024-07-12 12:36:54.765965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.801 [2024-07-12 12:36:54.770820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.771134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.771163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.802 [2024-07-12 12:36:54.776003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.776358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.776386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.802 [2024-07-12 12:36:54.781310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.781631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.781661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.802 [2024-07-12 12:36:54.786387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.786689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.786717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.802 [2024-07-12 12:36:54.791649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.791970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.792005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.802 [2024-07-12 12:36:54.796799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.797138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.797166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.802 [2024-07-12 12:36:54.802017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.802320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.802347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.802 [2024-07-12 12:36:54.807184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.807504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.807532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.802 [2024-07-12 12:36:54.812598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.812907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.812937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.802 [2024-07-12 12:36:54.817845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.818178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.818206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.802 [2024-07-12 12:36:54.823064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.823411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.823440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.802 [2024-07-12 12:36:54.828357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.828651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.828680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.802 [2024-07-12 12:36:54.833608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.833950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.833979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.802 [2024-07-12 12:36:54.838771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.839104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.839132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.802 [2024-07-12 12:36:54.844073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.844394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.844422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.802 [2024-07-12 12:36:54.849325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.849625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.849653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.802 [2024-07-12 12:36:54.854541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.854836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.854879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.802 [2024-07-12 12:36:54.859720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.860032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.860060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.802 [2024-07-12 12:36:54.864889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.865186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.865214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.802 [2024-07-12 12:36:54.870085] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.870382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.870411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.802 [2024-07-12 12:36:54.875170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.875509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.875537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.802 [2024-07-12 12:36:54.880400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:25.802 [2024-07-12 12:36:54.880724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.802 [2024-07-12 12:36:54.880753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.062 [2024-07-12 12:36:54.885709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.062 [2024-07-12 12:36:54.886017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.062 [2024-07-12 12:36:54.886045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.062 [2024-07-12 12:36:54.891005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.062 [2024-07-12 12:36:54.891337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.062 [2024-07-12 12:36:54.891365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.062 [2024-07-12 12:36:54.896332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.062 [2024-07-12 12:36:54.896674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.062 
[2024-07-12 12:36:54.896703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.062 [2024-07-12 12:36:54.901657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.062 [2024-07-12 12:36:54.901997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.062 [2024-07-12 12:36:54.902025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.062 [2024-07-12 12:36:54.906880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.062 [2024-07-12 12:36:54.907201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.062 [2024-07-12 12:36:54.907228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.062 [2024-07-12 12:36:54.912037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.062 [2024-07-12 12:36:54.912339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.062 [2024-07-12 12:36:54.912367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.062 [2024-07-12 12:36:54.917334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.062 [2024-07-12 12:36:54.917647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.062 [2024-07-12 12:36:54.917675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.062 [2024-07-12 12:36:54.922555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.062 [2024-07-12 12:36:54.922866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.062 [2024-07-12 12:36:54.922894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.062 [2024-07-12 12:36:54.927780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.062 [2024-07-12 12:36:54.928137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.062 [2024-07-12 12:36:54.928166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.062 [2024-07-12 12:36:54.933119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.062 [2024-07-12 12:36:54.933428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:26.062 [2024-07-12 12:36:54.933473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.062 [2024-07-12 12:36:54.938371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.062 [2024-07-12 12:36:54.938697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.062 [2024-07-12 12:36:54.938725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.062 [2024-07-12 12:36:54.943689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.062 [2024-07-12 12:36:54.944043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.062 [2024-07-12 12:36:54.944072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.062 [2024-07-12 12:36:54.948934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.062 [2024-07-12 12:36:54.949234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.062 [2024-07-12 12:36:54.949262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.062 [2024-07-12 12:36:54.954078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.062 [2024-07-12 12:36:54.954374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.062 [2024-07-12 12:36:54.954402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.062 [2024-07-12 12:36:54.959308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.062 [2024-07-12 12:36:54.959614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.062 [2024-07-12 12:36:54.959644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.062 [2024-07-12 12:36:54.964446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.062 [2024-07-12 12:36:54.964740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.062 [2024-07-12 12:36:54.964769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.062 [2024-07-12 12:36:54.969648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.062 [2024-07-12 12:36:54.969958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.062 [2024-07-12 12:36:54.969986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.062 [2024-07-12 12:36:54.974825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.062 [2024-07-12 12:36:54.975118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.062 [2024-07-12 12:36:54.975146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.062 [2024-07-12 12:36:54.980048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.062 [2024-07-12 12:36:54.980341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.062 [2024-07-12 12:36:54.980368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.062 [2024-07-12 12:36:54.985232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:54.985545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:54.985574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:54.990389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:54.990685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:54.990713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:54.995548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:54.995857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:54.995885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.000707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.001017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.001050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.005940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.006236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.006264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.011027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.011342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.011371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.016180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.016474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.016503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.021321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.021617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.021646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.026397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.026692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.026721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.031521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.031859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.031882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.036653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.036970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.037002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.041863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 
[2024-07-12 12:36:55.042157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.042184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.046984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.047290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.047317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.052120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.052415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.052444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.057261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.057558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.057587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.062429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.062724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.062754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.067495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.067809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.067838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.072629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.072939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.072967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.077765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with 
pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.078075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.078109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.082944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.083240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.083268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.088095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.088390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.088418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.093227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.093525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.093555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.098357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.098673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.098701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.103544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.103856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.103887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.108699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.109008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.109042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.113836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.114130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.114159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.118919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.119216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.119245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.124114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.124429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.124458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.129176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.063 [2024-07-12 12:36:55.129470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.063 [2024-07-12 12:36:55.129499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.063 [2024-07-12 12:36:55.134299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.064 [2024-07-12 12:36:55.134596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.064 [2024-07-12 12:36:55.134626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.064 [2024-07-12 12:36:55.139403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.064 [2024-07-12 12:36:55.139702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.064 [2024-07-12 12:36:55.139731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.144551] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.144878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.144906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.149701] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.150022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.150055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.154894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.155195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.155223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.160056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.160384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.160413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.165239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.165565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.165594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.170350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.170648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.170678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.175534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.175844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.175873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.180625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.180935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.180968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:25:26.324 [2024-07-12 12:36:55.185781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.186091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.186121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.190859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.191155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.191184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.195991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.196286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.196314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.201108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.201403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.201431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.206348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.206657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.206685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.211505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.211814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.211842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.216651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.216977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.217009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.221799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.222127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.222155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.226913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.227222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.227249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.232066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.232360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.232389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.237183] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.237507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.237535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.242308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.242618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.242647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.247459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.247753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.247782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.252544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.252851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.252879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.257700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.258007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.258035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.262923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.263234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.263267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.268047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.268341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.268374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.273324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.273637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.273672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.278411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.278706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.278740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.283552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.283860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.283892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.288714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.289020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.289052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.293877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.294171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.294203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.298923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.299216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.299248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.304039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.304332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-07-12 12:36:55.304360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.324 [2024-07-12 12:36:55.309174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.324 [2024-07-12 12:36:55.309497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.325 [2024-07-12 12:36:55.309530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.325 [2024-07-12 12:36:55.314475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.325 [2024-07-12 12:36:55.314798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.325 [2024-07-12 12:36:55.314844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.325 [2024-07-12 12:36:55.319690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.325 [2024-07-12 12:36:55.320003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.325 [2024-07-12 12:36:55.320036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.325 [2024-07-12 12:36:55.324882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.325 [2024-07-12 12:36:55.325177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.325 
[2024-07-12 12:36:55.325213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.325 [2024-07-12 12:36:55.330021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.325 [2024-07-12 12:36:55.330318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.325 [2024-07-12 12:36:55.330352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.325 [2024-07-12 12:36:55.335179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.325 [2024-07-12 12:36:55.335494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.325 [2024-07-12 12:36:55.335540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.325 [2024-07-12 12:36:55.340484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.325 [2024-07-12 12:36:55.340779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.325 [2024-07-12 12:36:55.340823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.325 [2024-07-12 12:36:55.345738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.325 [2024-07-12 12:36:55.346057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.325 [2024-07-12 12:36:55.346090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.325 [2024-07-12 12:36:55.350934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.325 [2024-07-12 12:36:55.351240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.325 [2024-07-12 12:36:55.351280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.325 [2024-07-12 12:36:55.356063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.325 [2024-07-12 12:36:55.356390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.325 [2024-07-12 12:36:55.356422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.325 [2024-07-12 12:36:55.361280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:26.325 [2024-07-12 12:36:55.361606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:26.325 [2024-07-12 12:36:55.361639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... further records of the same repeating pattern omitted: tcp.c:2067:data_crc32_calc_done reports a data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90, the affected WRITE (sqid:1 cid:15 nsid:1, len:32) is printed, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); only the timestamp, lba, and sqhd differ between records ...]
00:25:27.107 [2024-07-12 12:36:56.007489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on
tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.007787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.007830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.107 [2024-07-12 12:36:56.012561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.012900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.012933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.107 [2024-07-12 12:36:56.017777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.018101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.018139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.107 [2024-07-12 12:36:56.023058] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.023395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.023427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.107 [2024-07-12 12:36:56.028203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.028496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.028528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.107 [2024-07-12 12:36:56.033384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.033678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.033711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.107 [2024-07-12 12:36:56.038566] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.038886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.038919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.107 [2024-07-12 12:36:56.043922] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.044219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.044252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.107 [2024-07-12 12:36:56.049024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.049320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.049352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.107 [2024-07-12 12:36:56.054176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.054472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.054506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.107 [2024-07-12 12:36:56.059261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.059583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.059616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.107 [2024-07-12 12:36:56.064546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.064859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.064891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.107 [2024-07-12 12:36:56.069802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.070119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.070151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.107 [2024-07-12 12:36:56.075001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.075323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.075355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.107 [2024-07-12 12:36:56.080191] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.080512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.080544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.107 [2024-07-12 12:36:56.085372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.085709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.085742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.107 [2024-07-12 12:36:56.090638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.090972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.091005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.107 [2024-07-12 12:36:56.095741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.096057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.096091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.107 [2024-07-12 12:36:56.100771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.101112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.101144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.107 [2024-07-12 12:36:56.105833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.106155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.106187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.107 [2024-07-12 12:36:56.111180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.111503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.111535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:25:27.107 [2024-07-12 12:36:56.116497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.107 [2024-07-12 12:36:56.116792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-07-12 12:36:56.116838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.107 [2024-07-12 12:36:56.121604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.108 [2024-07-12 12:36:56.121914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.108 [2024-07-12 12:36:56.121946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.108 [2024-07-12 12:36:56.126757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.108 [2024-07-12 12:36:56.127065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.108 [2024-07-12 12:36:56.127100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.108 [2024-07-12 12:36:56.132185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.108 [2024-07-12 12:36:56.132493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.108 [2024-07-12 12:36:56.132529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.108 [2024-07-12 12:36:56.137412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.108 [2024-07-12 12:36:56.137721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.108 [2024-07-12 12:36:56.137757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.108 [2024-07-12 12:36:56.142643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.108 [2024-07-12 12:36:56.142951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.108 [2024-07-12 12:36:56.142982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.108 [2024-07-12 12:36:56.147837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.108 [2024-07-12 12:36:56.148163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.108 [2024-07-12 12:36:56.148195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.108 [2024-07-12 12:36:56.152973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.108 [2024-07-12 12:36:56.153282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.108 [2024-07-12 12:36:56.153314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.108 [2024-07-12 12:36:56.158113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.108 [2024-07-12 12:36:56.158407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.108 [2024-07-12 12:36:56.158440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.108 [2024-07-12 12:36:56.163219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.108 [2024-07-12 12:36:56.163540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.108 [2024-07-12 12:36:56.163573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.108 [2024-07-12 12:36:56.168318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.108 [2024-07-12 12:36:56.168626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.108 [2024-07-12 12:36:56.168659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.108 [2024-07-12 12:36:56.173527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.108 [2024-07-12 12:36:56.173839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.108 [2024-07-12 12:36:56.173871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.108 [2024-07-12 12:36:56.178728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.108 [2024-07-12 12:36:56.179042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.108 [2024-07-12 12:36:56.179078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.108 [2024-07-12 12:36:56.183969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.108 [2024-07-12 12:36:56.184263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.108 [2024-07-12 12:36:56.184296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.367 [2024-07-12 12:36:56.189060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.367 [2024-07-12 12:36:56.189358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.367 [2024-07-12 12:36:56.189394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.367 [2024-07-12 12:36:56.194203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.367 [2024-07-12 12:36:56.194495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.367 [2024-07-12 12:36:56.194527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.367 [2024-07-12 12:36:56.199348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.367 [2024-07-12 12:36:56.199645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.367 [2024-07-12 12:36:56.199678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.367 [2024-07-12 12:36:56.204467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.367 [2024-07-12 12:36:56.204764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.367 [2024-07-12 12:36:56.204809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.367 [2024-07-12 12:36:56.209595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.367 [2024-07-12 12:36:56.209900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.367 [2024-07-12 12:36:56.209932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.367 [2024-07-12 12:36:56.214736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.367 [2024-07-12 12:36:56.215048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.367 [2024-07-12 12:36:56.215081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.367 [2024-07-12 12:36:56.219887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.367 [2024-07-12 12:36:56.220183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.367 [2024-07-12 12:36:56.220215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.367 [2024-07-12 12:36:56.225013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.367 [2024-07-12 12:36:56.225326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.367 [2024-07-12 12:36:56.225361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.367 [2024-07-12 12:36:56.230215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.367 [2024-07-12 12:36:56.230517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.367 [2024-07-12 12:36:56.230550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.367 [2024-07-12 12:36:56.235448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.367 [2024-07-12 12:36:56.235745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.367 [2024-07-12 12:36:56.235781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.367 [2024-07-12 12:36:56.240671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.367 [2024-07-12 12:36:56.240987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.367 [2024-07-12 12:36:56.241019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.367 [2024-07-12 12:36:56.245805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.367 [2024-07-12 12:36:56.246101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.246133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.250938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.251232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.251264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.256067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.256359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 
[2024-07-12 12:36:56.256392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.261220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.261512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.261545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.266330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.266633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.266666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.271484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.271780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.271819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.276618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.276929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.276961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.281753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.282085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.282118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.286920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.287238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.287270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.292199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.292535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.292568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.297403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.297712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.297748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.302532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.302838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.302873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.307652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.307959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.308001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.312829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.313119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.313154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.317976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.318273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.318305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.323270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.323595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.323628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.328422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.328715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.328749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.333629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.333954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.333987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.338780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.339104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.339137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.343916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.344209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.344242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.349048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.349344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.349375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.354178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.354471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.354503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.359325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.359622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.368 [2024-07-12 12:36:56.359660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.368 [2024-07-12 12:36:56.364471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.368 [2024-07-12 12:36:56.364764] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.369 [2024-07-12 12:36:56.364806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.369 [2024-07-12 12:36:56.369623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.369 [2024-07-12 12:36:56.369930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.369 [2024-07-12 12:36:56.369963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.369 [2024-07-12 12:36:56.374721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.369 [2024-07-12 12:36:56.375032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.369 [2024-07-12 12:36:56.375064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.369 [2024-07-12 12:36:56.379898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.369 [2024-07-12 12:36:56.380205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.369 [2024-07-12 12:36:56.380238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.369 [2024-07-12 12:36:56.385061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.369 [2024-07-12 12:36:56.385358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.369 [2024-07-12 12:36:56.385391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.369 [2024-07-12 12:36:56.390185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.369 [2024-07-12 12:36:56.390501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.369 [2024-07-12 12:36:56.390534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.369 [2024-07-12 12:36:56.395383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.369 [2024-07-12 12:36:56.395680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.369 [2024-07-12 12:36:56.395713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.369 [2024-07-12 12:36:56.400511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.369 [2024-07-12 12:36:56.400835] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.369 [2024-07-12 12:36:56.400867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.369 [2024-07-12 12:36:56.405674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.369 [2024-07-12 12:36:56.405986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.369 [2024-07-12 12:36:56.406019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.369 [2024-07-12 12:36:56.410828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.369 [2024-07-12 12:36:56.411138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.369 [2024-07-12 12:36:56.411170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.369 [2024-07-12 12:36:56.416228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.369 [2024-07-12 12:36:56.416525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.369 [2024-07-12 12:36:56.416558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.369 [2024-07-12 12:36:56.421462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.369 [2024-07-12 12:36:56.421794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.369 [2024-07-12 12:36:56.421837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.369 [2024-07-12 12:36:56.426604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.369 [2024-07-12 12:36:56.426942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.369 [2024-07-12 12:36:56.426975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.369 [2024-07-12 12:36:56.431850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.369 [2024-07-12 12:36:56.432143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.369 [2024-07-12 12:36:56.432175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.369 [2024-07-12 12:36:56.437032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 
00:25:27.369 [2024-07-12 12:36:56.437327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.369 [2024-07-12 12:36:56.437359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.369 [2024-07-12 12:36:56.442156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.369 [2024-07-12 12:36:56.442449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.369 [2024-07-12 12:36:56.442483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.369 [2024-07-12 12:36:56.447306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.687 [2024-07-12 12:36:56.447606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.687 [2024-07-12 12:36:56.447650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.687 [2024-07-12 12:36:56.452427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.687 [2024-07-12 12:36:56.452721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.687 [2024-07-12 12:36:56.452757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.687 [2024-07-12 12:36:56.457576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.687 [2024-07-12 12:36:56.457894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.687 [2024-07-12 12:36:56.457925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.687 [2024-07-12 12:36:56.462720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.687 [2024-07-12 12:36:56.463029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.687 [2024-07-12 12:36:56.463064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.687 [2024-07-12 12:36:56.467855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.687 [2024-07-12 12:36:56.468147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.687 [2024-07-12 12:36:56.468184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.687 [2024-07-12 12:36:56.472883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.687 [2024-07-12 12:36:56.473178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.687 [2024-07-12 12:36:56.473209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.687 [2024-07-12 12:36:56.478073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.687 [2024-07-12 12:36:56.478364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.687 [2024-07-12 12:36:56.478397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.687 [2024-07-12 12:36:56.483175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.687 [2024-07-12 12:36:56.483480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.687 [2024-07-12 12:36:56.483513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.687 [2024-07-12 12:36:56.488323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.687 [2024-07-12 12:36:56.488632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.687 [2024-07-12 12:36:56.488664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.687 [2024-07-12 12:36:56.493465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.687 [2024-07-12 12:36:56.493773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.493816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.498659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.498976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.499009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.503828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.504127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.504156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.508903] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.509198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.509226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.513953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.514265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.514293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.519195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.519502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.519531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.524315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.524624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.524653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.529435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.529743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.529771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.534567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.534876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.534904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.539674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.539980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.540012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:25:27.688 [2024-07-12 12:36:56.544820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.545117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.545152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.550056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.550371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.550398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.555114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.555439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.555468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.560281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.560604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.560632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.565518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.565830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.565858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.570660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.570968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.570996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.575810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.576148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.576176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.581044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.581337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.581365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.586410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.586707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.586736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.591649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.591956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.591985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.596843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.597141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.597179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.602137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.602475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.602504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.607526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.607848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.607876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.612680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.612987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.613019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.617808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.618120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.618147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.622925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.623217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.623245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.628061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.628360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.628387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.633372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.633665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.688 [2024-07-12 12:36:56.633692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.688 [2024-07-12 12:36:56.638371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.688 [2024-07-12 12:36:56.638708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.689 [2024-07-12 12:36:56.638736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.689 [2024-07-12 12:36:56.643479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.689 [2024-07-12 12:36:56.643788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.689 [2024-07-12 12:36:56.643841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.689 [2024-07-12 12:36:56.648682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.689 [2024-07-12 12:36:56.649020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.689 [2024-07-12 12:36:56.649052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.689 [2024-07-12 12:36:56.653884] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.689 [2024-07-12 12:36:56.654197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.689 [2024-07-12 12:36:56.654225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.689 [2024-07-12 12:36:56.659133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.689 [2024-07-12 12:36:56.659458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.689 [2024-07-12 12:36:56.659486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.689 [2024-07-12 12:36:56.664299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.689 [2024-07-12 12:36:56.664598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.689 [2024-07-12 12:36:56.664626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.689 [2024-07-12 12:36:56.669384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.689 [2024-07-12 12:36:56.669675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.689 [2024-07-12 12:36:56.669703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.689 [2024-07-12 12:36:56.674678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.689 [2024-07-12 12:36:56.675010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.689 [2024-07-12 12:36:56.675038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.689 [2024-07-12 12:36:56.680021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.689 [2024-07-12 12:36:56.680368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.689 [2024-07-12 12:36:56.680396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.689 [2024-07-12 12:36:56.685408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.689 [2024-07-12 12:36:56.685720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.689 [2024-07-12 
12:36:56.685749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.689 [2024-07-12 12:36:56.690682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.689 [2024-07-12 12:36:56.691034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.689 [2024-07-12 12:36:56.691067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.689 [2024-07-12 12:36:56.695904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.689 [2024-07-12 12:36:56.696204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.689 [2024-07-12 12:36:56.696231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.689 [2024-07-12 12:36:56.700985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.689 [2024-07-12 12:36:56.701305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.689 [2024-07-12 12:36:56.701333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.689 [2024-07-12 12:36:56.706271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.689 [2024-07-12 12:36:56.706579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.689 [2024-07-12 12:36:56.706608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.689 [2024-07-12 12:36:56.711332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.689 [2024-07-12 12:36:56.711631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.689 [2024-07-12 12:36:56.711659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.689 [2024-07-12 12:36:56.716415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.689 [2024-07-12 12:36:56.716706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.689 [2024-07-12 12:36:56.716733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.689 [2024-07-12 12:36:56.721415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.689 [2024-07-12 12:36:56.721705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:27.689 [2024-07-12 12:36:56.721732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.689 [2024-07-12 12:36:56.726611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.689 [2024-07-12 12:36:56.726953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.689 [2024-07-12 12:36:56.726996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.689 [2024-07-12 12:36:56.731857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.689 [2024-07-12 12:36:56.732148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.689 [2024-07-12 12:36:56.732175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.689 [2024-07-12 12:36:56.736931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.689 [2024-07-12 12:36:56.737260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.689 [2024-07-12 12:36:56.737288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.689 [2024-07-12 12:36:56.741981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc2e710) with pdu=0x2000190fef90 00:25:27.689 [2024-07-12 12:36:56.742294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.689 [2024-07-12 12:36:56.742321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.689 00:25:27.689 Latency(us) 00:25:27.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.689 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:27.689 nvme0n1 : 2.00 5973.23 746.65 0.00 0.00 2672.44 2129.92 9055.88 00:25:27.689 =================================================================================================================== 00:25:27.689 Total : 5973.23 746.65 0.00 0.00 2672.44 2129.92 9055.88 00:25:27.689 0 00:25:27.947 12:36:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:27.947 12:36:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:27.947 12:36:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:27.947 | .driver_specific 00:25:27.947 | .nvme_error 00:25:27.947 | .status_code 00:25:27.947 | .command_transient_transport_error' 00:25:27.947 12:36:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:27.947 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 385 > 0 )) 00:25:27.947 12:36:57 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95514 00:25:27.947 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 95514 ']' 00:25:27.947 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 95514 00:25:27.947 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:27.947 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:27.947 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95514 00:25:28.205 killing process with pid 95514 00:25:28.205 Received shutdown signal, test time was about 2.000000 seconds 00:25:28.205 00:25:28.205 Latency(us) 00:25:28.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:28.205 =================================================================================================================== 00:25:28.205 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:28.205 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:28.205 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:28.205 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95514' 00:25:28.205 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 95514 00:25:28.205 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 95514 00:25:28.205 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 95304 00:25:28.205 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 95304 ']' 00:25:28.205 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 95304 00:25:28.205 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:28.205 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:28.205 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95304 00:25:28.205 killing process with pid 95304 00:25:28.205 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:28.205 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:28.205 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95304' 00:25:28.205 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 95304 00:25:28.205 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 95304 00:25:28.463 00:25:28.463 real 0m18.302s 00:25:28.463 user 0m35.444s 00:25:28.463 sys 0m4.901s 00:25:28.463 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:28.463 12:36:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:28.463 ************************************ 00:25:28.463 END TEST nvmf_digest_error 00:25:28.463 ************************************ 00:25:28.463 12:36:57 nvmf_tcp.nvmf_digest -- 
common/autotest_common.sh@1142 -- # return 0 00:25:28.463 12:36:57 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:28.463 12:36:57 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:28.463 12:36:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:28.463 12:36:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:28.722 rmmod nvme_tcp 00:25:28.722 rmmod nvme_fabrics 00:25:28.722 rmmod nvme_keyring 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:28.722 Process with pid 95304 is not found 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 95304 ']' 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 95304 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 95304 ']' 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 95304 00:25:28.722 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (95304) - No such process 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 95304 is not found' 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:28.722 00:25:28.722 real 0m38.138s 00:25:28.722 user 1m12.513s 00:25:28.722 sys 0m10.136s 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:28.722 12:36:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:28.722 ************************************ 00:25:28.722 END TEST nvmf_digest 00:25:28.722 ************************************ 00:25:28.722 12:36:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:28.722 12:36:57 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:25:28.722 12:36:57 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:25:28.722 12:36:57 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:25:28.722 12:36:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:28.722 12:36:57 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:25:28.722 12:36:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:28.722 ************************************ 00:25:28.722 START TEST nvmf_host_multipath 00:25:28.722 ************************************ 00:25:28.722 12:36:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:25:28.980 * Looking for test storage... 00:25:28.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:28.981 Cannot 
find device "nvmf_tgt_br" 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:28.981 Cannot find device "nvmf_tgt_br2" 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:28.981 Cannot find device "nvmf_tgt_br" 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:28.981 Cannot find device "nvmf_tgt_br2" 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:28.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:28.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:28.981 12:36:57 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:28.981 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:28.981 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:28.981 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:28.981 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:28.981 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:28.981 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:29.240 12:36:58 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:29.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:29.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:25:29.240 00:25:29.240 --- 10.0.0.2 ping statistics --- 00:25:29.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.240 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:29.240 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:29.240 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:25:29.240 00:25:29.240 --- 10.0.0.3 ping statistics --- 00:25:29.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.240 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:29.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:29.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:25:29.240 00:25:29.240 --- 10.0.0.1 ping statistics --- 00:25:29.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.240 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=95783 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 95783 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 95783 ']' 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:29.240 12:36:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:29.240 [2024-07-12 12:36:58.267733] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:25:29.240 [2024-07-12 12:36:58.267860] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.498 [2024-07-12 12:36:58.409353] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:29.498 [2024-07-12 12:36:58.508182] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:29.498 [2024-07-12 12:36:58.508443] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.498 [2024-07-12 12:36:58.508574] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:29.498 [2024-07-12 12:36:58.508630] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:29.498 [2024-07-12 12:36:58.508661] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:29.498 [2024-07-12 12:36:58.508841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.498 [2024-07-12 12:36:58.508847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.498 [2024-07-12 12:36:58.562289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:30.430 12:36:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:30.430 12:36:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:25:30.430 12:36:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:30.430 12:36:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:30.430 12:36:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:30.430 12:36:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.430 12:36:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=95783 00:25:30.430 12:36:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:30.430 [2024-07-12 12:36:59.498292] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.688 12:36:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:30.946 Malloc0 00:25:30.946 12:36:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:31.204 12:37:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:31.462 12:37:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.463 [2024-07-12 12:37:00.509447] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.463 12:37:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:31.721 [2024-07-12 12:37:00.729565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:31.721 12:37:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95833 00:25:31.721 12:37:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:31.721 12:37:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 
-- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:31.721 12:37:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 95833 /var/tmp/bdevperf.sock 00:25:31.721 12:37:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 95833 ']' 00:25:31.721 12:37:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:31.721 12:37:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:31.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:31.721 12:37:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:31.721 12:37:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:31.721 12:37:00 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:32.673 12:37:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:32.673 12:37:01 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:25:32.673 12:37:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:32.931 12:37:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:33.188 Nvme0n1 00:25:33.188 12:37:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:33.445 Nvme0n1 00:25:33.703 12:37:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:25:33.703 12:37:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:34.635 12:37:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:25:34.635 12:37:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:34.893 12:37:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:35.152 12:37:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:25:35.152 12:37:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95885 00:25:35.152 12:37:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95783 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:35.152 12:37:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:41.708 12:37:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:41.708 12:37:10 nvmf_tcp.nvmf_host_multipath -- 
host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:41.708 12:37:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:41.708 12:37:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:41.708 Attaching 4 probes... 00:25:41.708 @path[10.0.0.2, 4421]: 17567 00:25:41.708 @path[10.0.0.2, 4421]: 18063 00:25:41.708 @path[10.0.0.2, 4421]: 17809 00:25:41.708 @path[10.0.0.2, 4421]: 17704 00:25:41.708 @path[10.0.0.2, 4421]: 17952 00:25:41.708 12:37:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:41.708 12:37:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:41.708 12:37:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:41.708 12:37:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:41.708 12:37:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:41.708 12:37:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:41.708 12:37:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95885 00:25:41.708 12:37:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:41.708 12:37:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:25:41.708 12:37:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:41.708 12:37:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:41.967 12:37:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:25:41.967 12:37:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95992 00:25:41.967 12:37:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:41.967 12:37:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95783 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:48.521 12:37:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:48.521 12:37:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:48.521 12:37:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:25:48.521 12:37:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:48.521 Attaching 4 probes... 
00:25:48.521 @path[10.0.0.2, 4420]: 17976 00:25:48.521 @path[10.0.0.2, 4420]: 18380 00:25:48.521 @path[10.0.0.2, 4420]: 18057 00:25:48.521 @path[10.0.0.2, 4420]: 17484 00:25:48.521 @path[10.0.0.2, 4420]: 17850 00:25:48.521 12:37:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:48.521 12:37:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:48.521 12:37:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:48.521 12:37:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:25:48.521 12:37:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:48.521 12:37:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:48.521 12:37:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95992 00:25:48.521 12:37:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:48.521 12:37:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:25:48.521 12:37:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:48.521 12:37:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:48.782 12:37:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:25:48.782 12:37:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96109 00:25:48.782 12:37:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95783 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:48.782 12:37:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:55.362 12:37:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:55.362 12:37:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:55.363 12:37:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:55.363 12:37:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:55.363 Attaching 4 probes... 
00:25:55.363 @path[10.0.0.2, 4421]: 14330 00:25:55.363 @path[10.0.0.2, 4421]: 17810 00:25:55.363 @path[10.0.0.2, 4421]: 17768 00:25:55.363 @path[10.0.0.2, 4421]: 17749 00:25:55.363 @path[10.0.0.2, 4421]: 17976 00:25:55.363 12:37:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:55.363 12:37:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:55.363 12:37:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:55.363 12:37:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:55.363 12:37:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:55.363 12:37:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:55.363 12:37:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96109 00:25:55.363 12:37:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:55.363 12:37:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:25:55.363 12:37:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:55.363 12:37:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:55.621 12:37:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:25:55.621 12:37:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96223 00:25:55.621 12:37:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:55.621 12:37:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95783 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:02.185 12:37:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:26:02.185 12:37:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:02.185 12:37:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:26:02.185 12:37:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:02.185 Attaching 4 probes... 
00:26:02.185 00:26:02.185 00:26:02.185 00:26:02.185 00:26:02.185 00:26:02.185 12:37:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:02.185 12:37:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:02.185 12:37:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:02.185 12:37:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:26:02.185 12:37:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:26:02.185 12:37:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:26:02.185 12:37:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96223 00:26:02.185 12:37:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:02.185 12:37:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:26:02.186 12:37:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:02.186 12:37:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:02.443 12:37:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:26:02.443 12:37:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95783 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:02.443 12:37:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96330 00:26:02.443 12:37:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:09.013 12:37:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:09.013 12:37:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:09.013 12:37:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:09.013 12:37:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:09.013 Attaching 4 probes... 
00:26:09.013 @path[10.0.0.2, 4421]: 16141 00:26:09.013 @path[10.0.0.2, 4421]: 16661 00:26:09.013 @path[10.0.0.2, 4421]: 16725 00:26:09.013 @path[10.0.0.2, 4421]: 16256 00:26:09.013 @path[10.0.0.2, 4421]: 16189 00:26:09.013 12:37:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:09.013 12:37:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:09.013 12:37:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:09.013 12:37:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:09.013 12:37:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:09.013 12:37:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:09.013 12:37:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96330 00:26:09.013 12:37:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:09.013 12:37:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:09.013 12:37:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:26:09.961 12:37:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:26:09.961 12:37:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96454 00:26:09.961 12:37:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:09.961 12:37:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95783 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:16.525 12:37:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:16.525 12:37:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:26:16.525 12:37:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:26:16.525 12:37:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:16.525 Attaching 4 probes... 
00:26:16.525 @path[10.0.0.2, 4420]: 16760 00:26:16.525 @path[10.0.0.2, 4420]: 17797 00:26:16.525 @path[10.0.0.2, 4420]: 17897 00:26:16.525 @path[10.0.0.2, 4420]: 17867 00:26:16.525 @path[10.0.0.2, 4420]: 17800 00:26:16.525 12:37:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:16.525 12:37:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:16.525 12:37:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:16.525 12:37:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:26:16.525 12:37:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:26:16.525 12:37:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:26:16.525 12:37:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96454 00:26:16.525 12:37:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:16.525 12:37:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:16.525 [2024-07-12 12:37:45.491097] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:16.525 12:37:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:16.782 12:37:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:26:23.334 12:37:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:26:23.334 12:37:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96628 00:26:23.334 12:37:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95783 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:23.334 12:37:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:29.940 12:37:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:29.940 12:37:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:29.940 Attaching 4 probes... 
00:26:29.940 @path[10.0.0.2, 4421]: 17181 00:26:29.940 @path[10.0.0.2, 4421]: 17560 00:26:29.940 @path[10.0.0.2, 4421]: 17557 00:26:29.940 @path[10.0.0.2, 4421]: 17637 00:26:29.940 @path[10.0.0.2, 4421]: 17667 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96628 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95833 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 95833 ']' 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 95833 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95833 00:26:29.940 killing process with pid 95833 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95833' 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 95833 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 95833 00:26:29.940 Connection closed with partial response: 00:26:29.940 00:26:29.940 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95833 00:26:29.940 12:37:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:29.940 [2024-07-12 12:37:00.805006] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:26:29.940 [2024-07-12 12:37:00.805213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95833 ] 00:26:29.940 [2024-07-12 12:37:00.945302] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.940 [2024-07-12 12:37:01.042504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.940 [2024-07-12 12:37:01.104075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:29.940 Running I/O for 90 seconds... 
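The trace above repeatedly exercises two helpers: set_ANA_state flips the ANA state of the two listeners, and confirm_io_on_port checks both that the target reports the expected port as the active one and that the bpftrace probes actually observed I/O on that port. A rough bash reconstruction of those helpers, pieced together only from the traced commands (host/multipath.sh@58-@73); the NQN, the 10.0.0.2 address, the 4420/4421 ports and the script paths are taken verbatim from the log, while the function bodies, variable names such as bdevperf_pid, and the redirection into trace.txt are illustrative assumptions rather than the actual script:

    # Sketch reconstructed from the trace; assumptions are noted inline.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # multipath.sh@58-@59: set the ANA state of the 4420 and 4421 listeners.
    set_ANA_state() {
            $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n "$1"
            $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # multipath.sh@64-@73: attach the nvmf_path.bt probes to the bdevperf process,
    # wait for I/O, then compare the port the target reports as active with the
    # port the probes actually counted I/O on.
    confirm_io_on_port() {
            local state=$1 expected_port=$2
            # assumption: the probe output (the @path[...] map dump) ends up in trace.txt
            /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh "$bdevperf_pid" \
                    /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt &> trace.txt &
            dtrace_pid=$!                                      # @65
            sleep 6                                            # @66
            active_port=$($rpc nvmf_subsystem_get_listeners $nqn \
                    | jq -r ".[] | select(.ana_states[0].ana_state==\"$state\") | .address.trsvcid")   # @67
            # trace.txt lines look like: @path[10.0.0.2, 4421]: 17567
            port=$(cut -d ']' -f1 trace.txt | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p)       # @68-@69
            kill "$dtrace_pid" && rm -f trace.txt              # @72-@73
            [[ $active_port == "$expected_port" && $port == "$expected_port" ]]   # @70-@71
    }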
00:26:29.940 [2024-07-12 12:37:10.870278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.940 [2024-07-12 12:37:10.870372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.870436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.940 [2024-07-12 12:37:10.870459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.870491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.940 [2024-07-12 12:37:10.870507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.870535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.940 [2024-07-12 12:37:10.870552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.870574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.940 [2024-07-12 12:37:10.870590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.870612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.940 [2024-07-12 12:37:10.870627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.870650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:47136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.940 [2024-07-12 12:37:10.870666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.870688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.940 [2024-07-12 12:37:10.870704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.870726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.940 [2024-07-12 12:37:10.870741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.870764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:47160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.940 [2024-07-12 12:37:10.870780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.870818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.940 [2024-07-12 12:37:10.870855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.870880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.940 [2024-07-12 12:37:10.870897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.870919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.940 [2024-07-12 12:37:10.870935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.870957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.940 [2024-07-12 12:37:10.870972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.870994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.940 [2024-07-12 12:37:10.871010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.871034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.940 [2024-07-12 12:37:10.871049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.871072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.940 [2024-07-12 12:37:10.871088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.871112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:47224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.940 [2024-07-12 12:37:10.871128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.871151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.940 [2024-07-12 12:37:10.871166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.871189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.940 [2024-07-12 12:37:10.871205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.871227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.940 [2024-07-12 12:37:10.871243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.871265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:46832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.940 [2024-07-12 12:37:10.871299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:29.940 [2024-07-12 12:37:10.871324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:46840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.941 [2024-07-12 12:37:10.871349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.871374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:46848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.941 [2024-07-12 12:37:10.871390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.871413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.941 [2024-07-12 12:37:10.871429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.871451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.941 [2024-07-12 12:37:10.871468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.871490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.941 [2024-07-12 12:37:10.871506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.871529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.941 [2024-07-12 12:37:10.871544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.871567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.941 [2024-07-12 12:37:10.871583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.871605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:29.941 [2024-07-12 12:37:10.871629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.871652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.871668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.871691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.871707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.871859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.871887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.871914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.871931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.871953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.871970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.872025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.872063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:47320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.872102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.872143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:47336 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.872182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.872220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.872258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.872296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.872334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.872373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.941 [2024-07-12 12:37:10.872411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.941 [2024-07-12 12:37:10.872450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.941 [2024-07-12 12:37:10.872497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.941 [2024-07-12 12:37:10.872543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872566] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.941 [2024-07-12 12:37:10.872582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.941 [2024-07-12 12:37:10.872621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:46944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.941 [2024-07-12 12:37:10.872660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.941 [2024-07-12 12:37:10.872698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.872748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.872800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.872848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.872912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.872952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.872977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.872993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
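Every completion in this dump is failed with the status pair (03/02), i.e. status code type 3h (path related) with status code 02h, which SPDK decodes as ASYMMETRIC ACCESS INACCESSIBLE; the host-side multipath logic treats that as a cue to retry the I/O on the other listener. If one wanted to tally these path-related failures from a saved copy of this console output, a quick sketch (the build.log file name is an assumption, not something this job produces):

    # Hypothetical post-processing of a saved console log (build.log is an assumed name).
    # Count I/O completions failed with ANA "inaccessible" path status (03/02).
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log

    # Break the failures down per submission queue id to see which queue they arrived on.
    grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' build.log | sort | uniq -c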
00:26:29.941 [2024-07-12 12:37:10.873016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.873040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.873064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.873080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.873103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.873119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.873142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:47456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.873158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.873181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.941 [2024-07-12 12:37:10.873197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:29.941 [2024-07-12 12:37:10.873220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.873236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.873258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:47480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.873274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.873297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.873313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.873336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.873352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.873374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.873390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:48 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.873412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.873434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.873456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.873472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.873496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:47528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.873522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.873546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.873562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.873584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:46960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.942 [2024-07-12 12:37:10.873600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.873622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.942 [2024-07-12 12:37:10.873638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.873661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.942 [2024-07-12 12:37:10.873677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.873699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.942 [2024-07-12 12:37:10.873716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.873739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:46992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.942 [2024-07-12 12:37:10.873755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.873779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.942 [2024-07-12 12:37:10.873809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.873834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:47008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.942 [2024-07-12 12:37:10.873850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.873873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:47016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.942 [2024-07-12 12:37:10.873894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.873916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:47544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.873933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.873955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.873971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.873993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.874016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.874056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.874094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.874142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.874180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:29.942 [2024-07-12 12:37:10.874218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.874256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.874304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.874343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.874381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.874419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.874465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.874503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.874550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:47672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.874588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 
lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.942 [2024-07-12 12:37:10.874627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.942 [2024-07-12 12:37:10.874665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:47032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.942 [2024-07-12 12:37:10.874703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.942 [2024-07-12 12:37:10.874741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.942 [2024-07-12 12:37:10.874780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.942 [2024-07-12 12:37:10.874836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:47064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.942 [2024-07-12 12:37:10.874874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:29.942 [2024-07-12 12:37:10.874897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:47072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.942 [2024-07-12 12:37:10.874913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:10.876463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.943 [2024-07-12 12:37:10.876495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:10.876525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:47688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:10.876544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:10.876580] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:10.876599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:10.876621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:10.876637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:10.876661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:10.876677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:10.876703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:10.876719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:10.876741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:47728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:10.876756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:10.876779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:47736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:10.876811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:10.876852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:10.876873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:10.876896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:10.876913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:10.876935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:10.876951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:10.876974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:10.876989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
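Earlier in this excerpt the log also traces the killprocess helper (autotest_common.sh@948-@972) used to stop the bdevperf process with pid 95833 once the multipath checks finish. A simplified bash reconstruction of that sequence; the sudo special case and the error handling of the real helper are omitted, and the function structure is an assumption based only on the traced commands:

    # Sketch of the traced teardown helper; simplified, not the actual autotest_common.sh code.
    killprocess() {
            local pid=$1
            [[ -n $pid ]] || return 1              # @948: require a pid
            kill -0 "$pid" || return 1             # @952: is the process still alive?
            if [[ $(uname) == Linux ]]; then       # @953: Linux-only ps invocation
                    process_name=$(ps --no-headers -o comm= "$pid")   # @954
            fi
            # @958: the traced helper special-cases process_name == sudo; omitted here.
            echo "killing process with pid $pid"   # @966
            kill "$pid"                            # @967
            wait "$pid"                            # @972: reap the process before moving on
    }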
00:26:29.943 [2024-07-12 12:37:10.877012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:47776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:10.877027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:10.877049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:10.877065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:10.877086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:47792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:10.877111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:10.877136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:10.877152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:10.877184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:10.877202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:10.877225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:10.877241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:10.877263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:10.877278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:10.877301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:47832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:10.877320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.464507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:17.464588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.464663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:17.464684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.464708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:17.464736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.464758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:128272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:17.464774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.464795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:17.464836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.464874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.943 [2024-07-12 12:37:17.464889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.464911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.943 [2024-07-12 12:37:17.464952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.464993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.943 [2024-07-12 12:37:17.465009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.465031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.943 [2024-07-12 12:37:17.465047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.465069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:127832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.943 [2024-07-12 12:37:17.465084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.465106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.943 [2024-07-12 12:37:17.465121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.465143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:127848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.943 [2024-07-12 12:37:17.465158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.465180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.943 [2024-07-12 12:37:17.465195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.465217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:17.465233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.465255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:17.465270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.465292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:17.465307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.465334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:17.465352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.465376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:17.465392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.465415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:128328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:17.465430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.465462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:17.465479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.465501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.943 [2024-07-12 12:37:17.465517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.465539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:29.943 [2024-07-12 12:37:17.465554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:29.943 [2024-07-12 12:37:17.465582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.465597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.465619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:128368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.465635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.465658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:128376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.465674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.465696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.465711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.465733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.465749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.465771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.465786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.465808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.465846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.465869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.465885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.465908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.465923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.465953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 
nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.465970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.465992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.466008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.944 [2024-07-12 12:37:17.466047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.944 [2024-07-12 12:37:17.466085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:127880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.944 [2024-07-12 12:37:17.466123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:127888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.944 [2024-07-12 12:37:17.466161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:127896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.944 [2024-07-12 12:37:17.466200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:127904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.944 [2024-07-12 12:37:17.466238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.944 [2024-07-12 12:37:17.466276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:127920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.944 [2024-07-12 12:37:17.466314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466336] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.466351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:128456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.466389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:128464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.466433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.466472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.466510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.466547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.466585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.466648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.466689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.466727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 
sqhd:006e p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.466765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.466817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.466857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.466894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:29.944 [2024-07-12 12:37:17.466916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.944 [2024-07-12 12:37:17.466940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.466965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 12:37:17.466981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 12:37:17.467029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 12:37:17.467078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:127928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.945 [2024-07-12 12:37:17.467115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.945 [2024-07-12 12:37:17.467153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:127944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.945 [2024-07-12 12:37:17.467191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:127952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.945 [2024-07-12 12:37:17.467228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:127960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.945 [2024-07-12 12:37:17.467266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:127968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.945 [2024-07-12 12:37:17.467327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:127976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.945 [2024-07-12 12:37:17.467366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.945 [2024-07-12 12:37:17.467405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 12:37:17.467450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 12:37:17.467490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 12:37:17.467528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 
12:37:17.467566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 12:37:17.467603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 12:37:17.467657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 12:37:17.467694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 12:37:17.467732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 12:37:17.467770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 12:37:17.467832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 12:37:17.467871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 12:37:17.467910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 12:37:17.467948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.467978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:128696 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 12:37:17.467995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.468017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 12:37:17.468034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.468056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 12:37:17.468072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.468094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 12:37:17.468109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.468132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 12:37:17.468147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.468169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.945 [2024-07-12 12:37:17.468184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.468206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:127992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.945 [2024-07-12 12:37:17.468222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.468244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.945 [2024-07-12 12:37:17.468260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.468282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.945 [2024-07-12 12:37:17.468298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.468320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.945 [2024-07-12 12:37:17.468337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.468359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:128024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.945 [2024-07-12 12:37:17.468375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.468397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.945 [2024-07-12 12:37:17.468412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.468441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.945 [2024-07-12 12:37:17.468458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.468481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.945 [2024-07-12 12:37:17.468497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.468519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:128056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.945 [2024-07-12 12:37:17.468535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.468558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.945 [2024-07-12 12:37:17.468573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.468607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.945 [2024-07-12 12:37:17.468622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:29.945 [2024-07-12 12:37:17.468655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:17.468671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.468693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:17.468708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.468730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:17.468746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001f 
p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.468768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:17.468783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.468816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:128112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:17.468835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.468858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:17.468874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.468896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:17.468912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.468934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:17.468956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.468980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:17.468996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.469752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:17.469779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.469832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:17.469851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.469883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:17.469899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.469930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:17.469946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.469977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.946 [2024-07-12 12:37:17.469994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.470025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.946 [2024-07-12 12:37:17.470041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.470071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.946 [2024-07-12 12:37:17.470087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.470117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.946 [2024-07-12 12:37:17.470133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.470173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.946 [2024-07-12 12:37:17.470189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.470219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.946 [2024-07-12 12:37:17.470235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.470265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.946 [2024-07-12 12:37:17.470293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.470341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.946 [2024-07-12 12:37:17.470361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.470393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.946 [2024-07-12 12:37:17.470409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.470439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.946 [2024-07-12 
12:37:17.470455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.470485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:17.470501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.470531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:128192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:17.470547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.470577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:17.470594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.470625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:17.470641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.470671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:17.470687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.470717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:17.470733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.470764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:17.470780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:17.470826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:17.470844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:24.564095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.946 [2024-07-12 12:37:24.564264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:24.564356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:38720 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.946 [2024-07-12 12:37:24.564388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:24.564422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.946 [2024-07-12 12:37:24.564446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:24.564480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.946 [2024-07-12 12:37:24.564504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:24.564536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.946 [2024-07-12 12:37:24.564559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:24.564591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.946 [2024-07-12 12:37:24.564615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:24.564647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.946 [2024-07-12 12:37:24.564670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:24.564702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.946 [2024-07-12 12:37:24.564725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:24.564758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:24.564781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:24.564839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:24.564863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:24.564896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.946 [2024-07-12 12:37:24.564920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.946 [2024-07-12 12:37:24.564953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.564977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.565009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:38168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.565032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.565082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.565109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.565142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.565167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.565200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.565224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.565258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.565282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.565318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.565342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.565374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.565399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.565431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.565456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.565489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:38232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.565513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 
m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.565546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.565570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.565604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.565628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.565662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.565686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.566020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:38776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.947 [2024-07-12 12:37:24.566055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.566106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.947 [2024-07-12 12:37:24.566133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.566166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.947 [2024-07-12 12:37:24.566192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.566225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.947 [2024-07-12 12:37:24.566250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.566283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.947 [2024-07-12 12:37:24.566308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.566342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.947 [2024-07-12 12:37:24.566366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.566400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.947 [2024-07-12 12:37:24.566425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.566458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.947 [2024-07-12 12:37:24.566483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.566517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.566541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.566578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.566604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.566638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.566662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.566696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.566720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.566754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.566779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.566837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.566874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.566910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.566935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.566969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.566997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.567031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.567055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.567089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.567113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.567148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.567172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.567206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.567230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.567264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.567308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.567344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.567368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.567402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.567426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.567460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.947 [2024-07-12 12:37:24.567484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.567518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.947 [2024-07-12 12:37:24.567542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.567579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.947 [2024-07-12 12:37:24.567627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.567663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:29.947 [2024-07-12 12:37:24.567696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.567731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.947 [2024-07-12 12:37:24.567755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:29.947 [2024-07-12 12:37:24.567803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-07-12 12:37:24.567832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.567867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-07-12 12:37:24.567892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.567925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-07-12 12:37:24.567949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.567983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-07-12 12:37:24.568009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.568042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.568067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.568101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.568126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.568160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.568185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.568218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:38416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.568243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.568277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:38424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.568301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.568335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.568359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.568405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.568430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.568477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.568497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.568523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.568542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.568569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.568589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.568615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.568634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.568660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:38480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.568679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.568705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.568724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.568750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.568769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.568794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.568829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.568856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.568876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.568904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-07-12 12:37:24.568924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.568949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-07-12 12:37:24.568969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.569003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-07-12 12:37:24.569024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.569049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-07-12 12:37:24.569069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.569094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-07-12 12:37:24.569113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.569139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-07-12 12:37:24.569158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.569183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-07-12 12:37:24.569202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.569228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-07-12 12:37:24.569247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 
dnr:0 00:26:29.948 [2024-07-12 12:37:24.569272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.569291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.569317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.569336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.569362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.569381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.569407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.569427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.569452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.569471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.569497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.569516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.569542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.569568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.948 [2024-07-12 12:37:24.569596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.948 [2024-07-12 12:37:24.569616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.569641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.949 [2024-07-12 12:37:24.569661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.569686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.949 [2024-07-12 12:37:24.569706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.569732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.949 [2024-07-12 12:37:24.569751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.569776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.949 [2024-07-12 12:37:24.569810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.569838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.949 [2024-07-12 12:37:24.569857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.569883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.949 [2024-07-12 12:37:24.569901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.569927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.949 [2024-07-12 12:37:24.569946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.569971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.949 [2024-07-12 12:37:24.569990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.570015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.949 [2024-07-12 12:37:24.570034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.570061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.949 [2024-07-12 12:37:24.570081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.571016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.949 [2024-07-12 12:37:24.571061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.571101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.949 [2024-07-12 12:37:24.571123] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.571157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.949 [2024-07-12 12:37:24.571176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.571209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.949 [2024-07-12 12:37:24.571229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.571262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.949 [2024-07-12 12:37:24.571294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.571337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.949 [2024-07-12 12:37:24.571357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.571391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:24.571412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.571445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:24.571465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.571497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:24.571517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.571549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:24.571569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.571602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:24.571622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.571656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:29.949 [2024-07-12 12:37:24.571677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.571712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:24.571732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.571812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:24.571839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.571873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:24.571894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.571927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:24.571947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.571980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:24.571999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.572033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:24.572054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.572087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:24.572107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.572139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:24.572159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.572192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:24.572211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.572244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:102 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:24.572264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.572297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:24.572317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:24.572350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:24.572370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:37.902026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:37.902103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:37.902186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:37.902209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:37.902233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:37.902249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:37.902270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:37.902286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:37.902308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:37.902324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:37.902345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:37.902360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:37.902382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:37.902397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:29.949 [2024-07-12 12:37:37.902419] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-07-12 12:37:37.902434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.902456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.950 [2024-07-12 12:37:37.902471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.902493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.950 [2024-07-12 12:37:37.902508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.902530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.950 [2024-07-12 12:37:37.902545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.902567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.950 [2024-07-12 12:37:37.902582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.902604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.902619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.902650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.902667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.902689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.902705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.902727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.902742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.902765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.902780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 
m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.902822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.902839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.902862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.902878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.902900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.902916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.902939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.902954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.902983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.902999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.903037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.903074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.903112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.903157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.903197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.903235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.903284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.903326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.903365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.903403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.903440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.903480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.903518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.950 [2024-07-12 12:37:37.903566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.950 [2024-07-12 12:37:37.903603] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.950 [2024-07-12 12:37:37.903649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.950 [2024-07-12 12:37:37.903688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.950 [2024-07-12 12:37:37.903725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.950 [2024-07-12 12:37:37.903815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.950 [2024-07-12 12:37:37.903848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.950 [2024-07-12 12:37:37.903879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.950 [2024-07-12 12:37:37.903909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.950 [2024-07-12 12:37:37.903939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.950 [2024-07-12 12:37:37.903969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.903984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.950 [2024-07-12 12:37:37.903998] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.904014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.950 [2024-07-12 12:37:37.904028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.904044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.950 [2024-07-12 12:37:37.904057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.950 [2024-07-12 12:37:37.904076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.950 [2024-07-12 12:37:37.904091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.904131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.904161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.904190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.951 [2024-07-12 12:37:37.904222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.951 [2024-07-12 12:37:37.904252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.951 [2024-07-12 12:37:37.904282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.951 [2024-07-12 12:37:37.904312] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.951 [2024-07-12 12:37:37.904342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.951 [2024-07-12 12:37:37.904372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.951 [2024-07-12 12:37:37.904403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.951 [2024-07-12 12:37:37.904434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.904464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.904494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.904534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.904564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.904604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.904643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.904673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.904703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.904732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.904762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.904805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.951 [2024-07-12 12:37:37.904836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.951 [2024-07-12 12:37:37.904866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.951 [2024-07-12 12:37:37.904896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.951 [2024-07-12 12:37:37.904933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.904951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.951 [2024-07-12 12:37:37.904965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 
[2024-07-12 12:37:37.904981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.951 [2024-07-12 12:37:37.904995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.905011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.951 [2024-07-12 12:37:37.905026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.905051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.951 [2024-07-12 12:37:37.905065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.905081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.905095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.905117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.905131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.905147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.905161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.905177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.905191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.905207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.905221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.905236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.905251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.905266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.905280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.905296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.951 [2024-07-12 12:37:37.905310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.905332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.951 [2024-07-12 12:37:37.905347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.905363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.951 [2024-07-12 12:37:37.905377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.905393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.951 [2024-07-12 12:37:37.905407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.905423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.951 [2024-07-12 12:37:37.905437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.905452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.951 [2024-07-12 12:37:37.905466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.951 [2024-07-12 12:37:37.905482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.952 [2024-07-12 12:37:37.905496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.905511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.952 [2024-07-12 12:37:37.905525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.905546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.952 [2024-07-12 12:37:37.905560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.905576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.952 [2024-07-12 12:37:37.905590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.905611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:90 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.952 [2024-07-12 12:37:37.905625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.905641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.952 [2024-07-12 12:37:37.905655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.905671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.952 [2024-07-12 12:37:37.905685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.905701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.952 [2024-07-12 12:37:37.905720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.905737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.952 [2024-07-12 12:37:37.905751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.905768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.952 [2024-07-12 12:37:37.905782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.905809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.952 [2024-07-12 12:37:37.905824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.905840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.952 [2024-07-12 12:37:37.905854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.905870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.952 [2024-07-12 12:37:37.905884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.905900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.952 [2024-07-12 12:37:37.905914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.905930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62304 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.952 [2024-07-12 12:37:37.905944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.905960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.952 [2024-07-12 12:37:37.905980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.905996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.952 [2024-07-12 12:37:37.906010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.906026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.952 [2024-07-12 12:37:37.906040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.906059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d28a0 is same with the state(5) to be set 00:26:29.952 [2024-07-12 12:37:37.906077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.952 [2024-07-12 12:37:37.906089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.952 [2024-07-12 12:37:37.906101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62336 len:8 PRP1 0x0 PRP2 0x0 00:26:29.952 [2024-07-12 12:37:37.906120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.906143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.952 [2024-07-12 12:37:37.906155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.952 [2024-07-12 12:37:37.906166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62728 len:8 PRP1 0x0 PRP2 0x0 00:26:29.952 [2024-07-12 12:37:37.906180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.906194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.952 [2024-07-12 12:37:37.906204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.952 [2024-07-12 12:37:37.906215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62736 len:8 PRP1 0x0 PRP2 0x0 00:26:29.952 [2024-07-12 12:37:37.906228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.906243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.952 [2024-07-12 12:37:37.906253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.952 [2024-07-12 12:37:37.906264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:62744 len:8 PRP1 0x0 PRP2 0x0 00:26:29.952 [2024-07-12 12:37:37.906277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.906291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.952 [2024-07-12 12:37:37.906301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.952 [2024-07-12 12:37:37.906313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62752 len:8 PRP1 0x0 PRP2 0x0 00:26:29.952 [2024-07-12 12:37:37.906327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.906341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.952 [2024-07-12 12:37:37.906351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.952 [2024-07-12 12:37:37.906362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62760 len:8 PRP1 0x0 PRP2 0x0 00:26:29.952 [2024-07-12 12:37:37.906376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.906389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.952 [2024-07-12 12:37:37.906400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.952 [2024-07-12 12:37:37.906411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62768 len:8 PRP1 0x0 PRP2 0x0 00:26:29.952 [2024-07-12 12:37:37.906424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.906438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.952 [2024-07-12 12:37:37.906448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.952 [2024-07-12 12:37:37.906459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62776 len:8 PRP1 0x0 PRP2 0x0 00:26:29.952 [2024-07-12 12:37:37.906477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.906492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.952 [2024-07-12 12:37:37.906503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.952 [2024-07-12 12:37:37.906514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62784 len:8 PRP1 0x0 PRP2 0x0 00:26:29.952 [2024-07-12 12:37:37.906538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.906555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.952 [2024-07-12 12:37:37.906565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.952 [2024-07-12 12:37:37.906577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62792 len:8 PRP1 0x0 PRP2 0x0 00:26:29.952 
[2024-07-12 12:37:37.906590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.906604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.952 [2024-07-12 12:37:37.906615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.952 [2024-07-12 12:37:37.906626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62800 len:8 PRP1 0x0 PRP2 0x0 00:26:29.952 [2024-07-12 12:37:37.906640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.952 [2024-07-12 12:37:37.906654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.953 [2024-07-12 12:37:37.906664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.953 [2024-07-12 12:37:37.906675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62808 len:8 PRP1 0x0 PRP2 0x0 00:26:29.953 [2024-07-12 12:37:37.906689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.953 [2024-07-12 12:37:37.906703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.953 [2024-07-12 12:37:37.906713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.953 [2024-07-12 12:37:37.906724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62816 len:8 PRP1 0x0 PRP2 0x0 00:26:29.953 [2024-07-12 12:37:37.906738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.953 [2024-07-12 12:37:37.906752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.953 [2024-07-12 12:37:37.906762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.953 [2024-07-12 12:37:37.906773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62824 len:8 PRP1 0x0 PRP2 0x0 00:26:29.953 [2024-07-12 12:37:37.906800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.953 [2024-07-12 12:37:37.906817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.953 [2024-07-12 12:37:37.906827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.953 [2024-07-12 12:37:37.906838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62832 len:8 PRP1 0x0 PRP2 0x0 00:26:29.953 [2024-07-12 12:37:37.906852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.953 [2024-07-12 12:37:37.906866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.953 [2024-07-12 12:37:37.906877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.953 [2024-07-12 12:37:37.906888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62840 len:8 PRP1 0x0 PRP2 0x0 00:26:29.953 [2024-07-12 12:37:37.906902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.953 [2024-07-12 12:37:37.906916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.953 [2024-07-12 12:37:37.906926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.953 [2024-07-12 12:37:37.906944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62848 len:8 PRP1 0x0 PRP2 0x0 00:26:29.953 [2024-07-12 12:37:37.906958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.953 [2024-07-12 12:37:37.907017] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18d28a0 was disconnected and freed. reset controller. 00:26:29.953 [2024-07-12 12:37:37.908218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.953 [2024-07-12 12:37:37.908299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.953 [2024-07-12 12:37:37.908321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.953 [2024-07-12 12:37:37.908352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e13d0 (9): Bad file descriptor 00:26:29.953 [2024-07-12 12:37:37.908821] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.953 [2024-07-12 12:37:37.908854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e13d0 with addr=10.0.0.2, port=4421 00:26:29.953 [2024-07-12 12:37:37.908871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e13d0 is same with the state(5) to be set 00:26:29.953 [2024-07-12 12:37:37.908951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e13d0 (9): Bad file descriptor 00:26:29.953 [2024-07-12 12:37:37.908990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:29.953 [2024-07-12 12:37:37.909006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:29.953 [2024-07-12 12:37:37.909021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.953 [2024-07-12 12:37:37.909052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:29.953 [2024-07-12 12:37:37.909069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.953 [2024-07-12 12:37:47.965579] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
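The burst of "ABORTED - SQ DELETION" completions above is the expected signature of the active path being torn down mid-run: deleting the listener's submission queue fails every queued request, after which the initiator-side bdev_nvme layer resets the controller and reconnects to the surviving listener on 10.0.0.2:4421. A rough sketch of how such a path flip can be driven with the same rpc.py verbs that appear elsewhere in this trace follows; the exact sequence multipath.sh uses is not part of this excerpt, and the ordering shown is an assumption.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # make the second path available before dropping the first (assumed ordering)
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # removing the active listener deletes its SQ; in-flight I/O completes as "ABORTED - SQ DELETION"
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdev_nvme then logs "resetting controller" and re-establishes the queue pair on port 4421
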
00:26:29.953 Received shutdown signal, test time was about 55.484027 seconds
00:26:29.953
00:26:29.953 Latency(us)
00:26:29.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:29.953 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:29.953 Verification LBA range: start 0x0 length 0x4000
00:26:29.953 Nvme0n1 : 55.48 7483.47 29.23 0.00 0.00 17077.70 463.59 7046430.72
00:26:29.953 ===================================================================================================================
00:26:29.953 Total : 7483.47 29.23 0.00 0.00 17077.70 463.59 7046430.72
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:29.953 rmmod nvme_tcp
00:26:29.953 rmmod nvme_fabrics
00:26:29.953 rmmod nvme_keyring
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 95783 ']'
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 95783
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 95783 ']'
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 95783
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95783
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95783' killing process with pid 95783
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 95783
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 95783
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath --
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:29.953 12:37:58 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.953 12:37:59 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:29.953 00:26:29.953 real 1m1.256s 00:26:29.953 user 2m50.886s 00:26:29.953 sys 0m17.780s 00:26:29.953 12:37:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:29.953 12:37:59 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:29.953 ************************************ 00:26:29.953 END TEST nvmf_host_multipath 00:26:29.953 ************************************ 00:26:30.212 12:37:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:30.212 12:37:59 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:26:30.212 12:37:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:30.212 12:37:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:30.212 12:37:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:30.212 ************************************ 00:26:30.212 START TEST nvmf_timeout 00:26:30.212 ************************************ 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:26:30.212 * Looking for test storage... 
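The nvmftestfini trace above unwinds everything the multipath run set up before the timeout suite starts. Collected into one place, and assuming the usual helper semantics (the body of _remove_spdk_ns is not shown in this excerpt, so the namespace deletion below is an assumption), the teardown amounts to roughly:

    sync
    modprobe -v -r nvme-tcp            # unloads nvme_tcp, nvme_fabrics and nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill 95783 && wait 95783           # stop the nvmf_tgt reactor process used by the multipath test (pid from this trace)
    ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of _remove_spdk_ns for the target namespace
    ip -4 addr flush nvmf_init_if      # clear the initiator-side veth address, as logged above
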
00:26:30.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.212 
12:37:59 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:30.212 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.213 12:37:59 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:30.213 Cannot find device "nvmf_tgt_br" 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:30.213 Cannot find device "nvmf_tgt_br2" 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:30.213 Cannot find device "nvmf_tgt_br" 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:30.213 Cannot find device "nvmf_tgt_br2" 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:30.213 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:30.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:30.472 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:30.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:30.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:26:30.472 00:26:30.472 --- 10.0.0.2 ping statistics --- 00:26:30.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.472 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:30.472 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:30.472 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:26:30.472 00:26:30.472 --- 10.0.0.3 ping statistics --- 00:26:30.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.472 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:30.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:30.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:26:30.472 00:26:30.472 --- 10.0.0.1 ping statistics --- 00:26:30.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.472 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=96935 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 96935 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96935 ']' 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:30.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:30.472 12:37:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:30.731 [2024-07-12 12:37:59.595284] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
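Up to this point the harness has rebuilt the isolated network the timeout test runs on: a veth pair per endpoint, the target-side interfaces moved into the nvmf_tgt_ns_spdk namespace, everything joined by the nvmf_br bridge, and reachability confirmed by the pings above; the target application then starts (its DPDK initialization banner continues below). Condensed from that trace, and omitting the second target interface (nvmf_tgt_if2 / 10.0.0.3), which is handled the same way, the topology is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                 # initiator -> target sanity check, as in the trace
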
00:26:30.731 [2024-07-12 12:37:59.595358] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:30.731 [2024-07-12 12:37:59.731658] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:30.990 [2024-07-12 12:37:59.818840] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:30.990 [2024-07-12 12:37:59.818902] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:30.990 [2024-07-12 12:37:59.818916] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:30.990 [2024-07-12 12:37:59.818927] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:30.990 [2024-07-12 12:37:59.818936] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:30.990 [2024-07-12 12:37:59.819094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.990 [2024-07-12 12:37:59.819107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.990 [2024-07-12 12:37:59.875907] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:31.556 12:38:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:31.556 12:38:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:26:31.556 12:38:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:31.556 12:38:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:31.556 12:38:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:31.556 12:38:00 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:31.556 12:38:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:31.556 12:38:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:31.813 [2024-07-12 12:38:00.831880] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:31.813 12:38:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:32.070 Malloc0 00:26:32.328 12:38:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:32.328 12:38:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:32.894 12:38:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:32.894 [2024-07-12 12:38:01.879421] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.894 12:38:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96983 00:26:32.894 12:38:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:32.894 12:38:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 96983 /var/tmp/bdevperf.sock 00:26:32.894 12:38:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96983 ']' 00:26:32.894 12:38:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:32.894 12:38:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:32.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:32.894 12:38:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:32.894 12:38:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:32.894 12:38:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.894 [2024-07-12 12:38:01.950987] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:26:32.894 [2024-07-12 12:38:01.951084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96983 ] 00:26:33.152 [2024-07-12 12:38:02.088398] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.152 [2024-07-12 12:38:02.187039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:33.410 [2024-07-12 12:38:02.244890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:33.975 12:38:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:33.975 12:38:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:26:33.975 12:38:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:34.233 12:38:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:26:34.489 NVMe0n1 00:26:34.489 12:38:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=97008 00:26:34.489 12:38:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:34.489 12:38:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:26:34.746 Running I/O for 10 seconds... 
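With the network in place, the trace above brings up both sides of the timeout test. Pulled together from those entries (only commands that appear in the log, shown here as a sketch rather than a prescriptive recipe), the bring-up is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # target side: nvmf_tgt (pid 96935 above) is already running inside nvmf_tgt_ns_spdk
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf (pid 96983 above) with its own RPC socket; the two reconnect
    # knobs on attach are what the timeout scenarios below exercise
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -f &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

The listener removal logged immediately after this point is the first fault injection: with the only path gone, queued I/O is aborted and bdev_nvme begins its reconnect and controller-loss handling.
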
00:26:35.680 12:38:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:35.940 [2024-07-12 12:38:04.762263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762328] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762378] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762509] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762526] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762535] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762560] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.762568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180c680 is same with the state(5) to be set 00:26:35.940 [2024-07-12 12:38:04.765027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.940 [2024-07-12 12:38:04.765064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.940 [2024-07-12 12:38:04.765086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.940 [2024-07-12 12:38:04.765097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.940 [2024-07-12 12:38:04.765109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.940 [2024-07-12 12:38:04.765119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.940 [2024-07-12 12:38:04.765130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.940 [2024-07-12 12:38:04.765140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.940 [2024-07-12 12:38:04.765151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.940 [2024-07-12 12:38:04.765161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.940 [2024-07-12 12:38:04.765172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.940 [2024-07-12 12:38:04.765182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.940 [2024-07-12 12:38:04.765193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.940 [2024-07-12 12:38:04.765202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.940 [2024-07-12 12:38:04.765213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62448 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:35.940 [2024-07-12 12:38:04.765222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.940 [2024-07-12 12:38:04.765233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.940 [2024-07-12 12:38:04.765243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.940 [2024-07-12 12:38:04.765254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.940 [2024-07-12 12:38:04.765263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.940 [2024-07-12 12:38:04.765275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.940 [2024-07-12 12:38:04.765284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.940 [2024-07-12 12:38:04.765295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.940 [2024-07-12 12:38:04.765304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.940 [2024-07-12 12:38:04.765316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.940 [2024-07-12 12:38:04.765325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.940 [2024-07-12 12:38:04.765345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.941 [2024-07-12 12:38:04.765355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.941 [2024-07-12 12:38:04.765376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.941 [2024-07-12 12:38:04.765398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.941 [2024-07-12 12:38:04.765420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.941 
[2024-07-12 12:38:04.765441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.941 [2024-07-12 12:38:04.765462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.941 [2024-07-12 12:38:04.765482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.941 [2024-07-12 12:38:04.765503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.941 [2024-07-12 12:38:04.765523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.941 [2024-07-12 12:38:04.765543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.941 [2024-07-12 12:38:04.765565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.941 [2024-07-12 12:38:04.765585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.941 [2024-07-12 12:38:04.765607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.941 [2024-07-12 12:38:04.765628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.941 [2024-07-12 12:38:04.765649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.941 [2024-07-12 12:38:04.765670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.941 [2024-07-12 12:38:04.765697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.941 [2024-07-12 12:38:04.765717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.941 [2024-07-12 12:38:04.765739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.941 [2024-07-12 12:38:04.765761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.941 [2024-07-12 12:38:04.765782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.941 [2024-07-12 12:38:04.765819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.941 [2024-07-12 12:38:04.765840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.941 [2024-07-12 12:38:04.765860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.941 [2024-07-12 12:38:04.765881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.941 [2024-07-12 12:38:04.765903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.941 [2024-07-12 12:38:04.765924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.941 [2024-07-12 12:38:04.765946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.941 [2024-07-12 12:38:04.765967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:35.941 [2024-07-12 12:38:04.765987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.765998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.941 [2024-07-12 12:38:04.766007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.766017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14366f0 is same with the state(5) to be set 00:26:35.941 [2024-07-12 12:38:04.766029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.941 [2024-07-12 12:38:04.766037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.941 [2024-07-12 12:38:04.766045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62624 len:8 PRP1 0x0 PRP2 0x0 00:26:35.941 [2024-07-12 12:38:04.766054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.766067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.941 [2024-07-12 12:38:04.766076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.941 [2024-07-12 12:38:04.766084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62752 len:8 PRP1 0x0 PRP2 0x0 00:26:35.941 [2024-07-12 12:38:04.766093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.766103] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.941 [2024-07-12 12:38:04.766111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.941 [2024-07-12 12:38:04.766119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62760 len:8 PRP1 0x0 PRP2 0x0 00:26:35.941 [2024-07-12 12:38:04.766128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.766138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.941 [2024-07-12 12:38:04.766145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.941 [2024-07-12 12:38:04.766154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62768 len:8 PRP1 0x0 PRP2 0x0 00:26:35.941 [2024-07-12 12:38:04.766164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.766174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.941 [2024-07-12 12:38:04.766181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.941 [2024-07-12 12:38:04.766189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62776 len:8 PRP1 0x0 PRP2 0x0 00:26:35.941 [2024-07-12 12:38:04.766198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.766208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.941 [2024-07-12 12:38:04.766215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.941 [2024-07-12 12:38:04.766223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62784 len:8 PRP1 0x0 PRP2 0x0 00:26:35.941 [2024-07-12 12:38:04.766233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.941 [2024-07-12 12:38:04.766242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.941 [2024-07-12 12:38:04.766249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.766256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62792 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.766265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.766274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.766282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.766289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62800 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.766299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.766309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.766316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.766324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62808 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.766334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.766344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.766351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.766359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62816 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.766369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.766379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.766386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.766394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62824 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.766403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.766413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.766422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.766430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62832 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.766439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.766449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.766456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.766464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62840 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.766473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.766483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.766491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.766499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62848 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.766509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.766518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 
12:38:04.766525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.766533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62856 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.766542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.766552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.766559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.766567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62864 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.766576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.766585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.766593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.766601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62872 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.766610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.766619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.766627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.766634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62880 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.766643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.766652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.766669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.766677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62888 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.766686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.766697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.766704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.766712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62896 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.766721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.766730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.766738] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.766746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62904 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.766755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.766764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.766771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.766779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62912 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.766801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.766812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.766820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.766828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62920 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.766837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.766847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.766854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.766863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62928 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.766872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.766881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.766889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.766897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62936 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.766906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.766915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.766922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.766930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62944 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.766939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.766949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.766962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.766970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62952 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.766980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.766990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.766997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.767005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62960 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.767014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.767024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.767032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.767039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62968 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.767048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.767057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.767064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.767072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62976 len:8 PRP1 0x0 PRP2 0x0 00:26:35.942 [2024-07-12 12:38:04.767080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.942 [2024-07-12 12:38:04.767089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.942 [2024-07-12 12:38:04.767097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.942 [2024-07-12 12:38:04.767104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62984 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.767113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.767122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.767129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.767137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62992 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.767146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.767156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.767163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 
12:38:04.767171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63000 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.767180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.767190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.767197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.767205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63008 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.767214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.767224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.767235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.767243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63016 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.767252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.767261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.767279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.767287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63024 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.767296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.767306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.767314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.767322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63032 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.767331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.767340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.767348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.767355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63040 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.767364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.767373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.767381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.767388] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63048 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.767397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.767407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.767414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.767431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63056 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.767447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.767457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.767464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.767472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63064 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.767480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.767490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.767497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.767506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63072 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.767515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.767525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.767536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.767544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63080 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.767553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.767563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.767570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.767578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63088 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.767587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.767597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.767604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.767611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:63096 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.767621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.767630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.767637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.767645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63104 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.767654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.767663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.767671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.767678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63112 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.767687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.767697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.767704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.767711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63120 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.767725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.767735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.767742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.767750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63128 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.767759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.767769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.767777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.768218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63136 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.768286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.768590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.768636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.768671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63144 len:8 PRP1 0x0 PRP2 0x0 
00:26:35.943 [2024-07-12 12:38:04.768805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.768870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.768989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.769074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63152 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.769136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.769189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.769278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.769342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63160 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.769392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.769443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.769474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.769581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63168 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.769645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.769700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.943 [2024-07-12 12:38:04.769816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.943 [2024-07-12 12:38:04.769853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63176 len:8 PRP1 0x0 PRP2 0x0 00:26:35.943 [2024-07-12 12:38:04.769964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.943 [2024-07-12 12:38:04.770021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.944 [2024-07-12 12:38:04.770053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.944 [2024-07-12 12:38:04.770148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63184 len:8 PRP1 0x0 PRP2 0x0 00:26:35.944 [2024-07-12 12:38:04.770233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.944 [2024-07-12 12:38:04.770285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.944 [2024-07-12 12:38:04.770316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.944 [2024-07-12 12:38:04.770349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63192 len:8 PRP1 0x0 PRP2 0x0 00:26:35.944 [2024-07-12 12:38:04.770399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.944 [2024-07-12 12:38:04.770554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.944 [2024-07-12 12:38:04.770585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.944 [2024-07-12 12:38:04.770616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63200 len:8 PRP1 0x0 PRP2 0x0 00:26:35.944 [2024-07-12 12:38:04.770665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.944 [2024-07-12 12:38:04.770807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.944 [2024-07-12 12:38:04.770880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.944 [2024-07-12 12:38:04.770914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63208 len:8 PRP1 0x0 PRP2 0x0 00:26:35.944 [2024-07-12 12:38:04.770970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.944 [2024-07-12 12:38:04.771021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.944 [2024-07-12 12:38:04.771052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.944 [2024-07-12 12:38:04.771189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63216 len:8 PRP1 0x0 PRP2 0x0 00:26:35.944 [2024-07-12 12:38:04.771254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.944 [2024-07-12 12:38:04.771411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.944 [2024-07-12 12:38:04.771428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.944 [2024-07-12 12:38:04.771437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63224 len:8 PRP1 0x0 PRP2 0x0 00:26:35.944 [2024-07-12 12:38:04.771447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.944 [2024-07-12 12:38:04.771457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.944 [2024-07-12 12:38:04.771465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.944 [2024-07-12 12:38:04.771483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63232 len:8 PRP1 0x0 PRP2 0x0 00:26:35.944 [2024-07-12 12:38:04.771492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.944 [2024-07-12 12:38:04.771501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.944 [2024-07-12 12:38:04.771509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.944 [2024-07-12 12:38:04.771516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63240 len:8 PRP1 0x0 PRP2 0x0 00:26:35.944 [2024-07-12 12:38:04.771525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.944 [2024-07-12 12:38:04.771534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.944 [2024-07-12 12:38:04.771542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.944 [2024-07-12 12:38:04.771550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63248 len:8 PRP1 0x0 PRP2 0x0 00:26:35.944 [2024-07-12 12:38:04.771565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.944 [2024-07-12 12:38:04.771574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.944 [2024-07-12 12:38:04.771582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.944 [2024-07-12 12:38:04.771590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63256 len:8 PRP1 0x0 PRP2 0x0 00:26:35.944 [2024-07-12 12:38:04.780138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.944 [2024-07-12 12:38:04.780171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.944 [2024-07-12 12:38:04.780180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.944 [2024-07-12 12:38:04.780191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63264 len:8 PRP1 0x0 PRP2 0x0 00:26:35.944 [2024-07-12 12:38:04.780201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.944 [2024-07-12 12:38:04.780211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.944 [2024-07-12 12:38:04.780219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.944 [2024-07-12 12:38:04.780227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63272 len:8 PRP1 0x0 PRP2 0x0 00:26:35.944 [2024-07-12 12:38:04.780236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.944 [2024-07-12 12:38:04.780246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.944 [2024-07-12 12:38:04.780253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.944 [2024-07-12 12:38:04.780261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63280 len:8 PRP1 0x0 PRP2 0x0 00:26:35.944 [2024-07-12 12:38:04.780270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.944 [2024-07-12 12:38:04.780280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.944 [2024-07-12 12:38:04.780287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.944 [2024-07-12 12:38:04.780295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63288 len:8 PRP1 0x0 PRP2 0x0 00:26:35.944 [2024-07-12 12:38:04.780303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:35.944 [2024-07-12 12:38:04.780313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.944 [2024-07-12 12:38:04.780320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.944 [2024-07-12 12:38:04.780328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63296 len:8 PRP1 0x0 PRP2 0x0 00:26:35.944 [2024-07-12 12:38:04.780336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.944 [2024-07-12 12:38:04.780346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.944 12:38:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:26:35.944 [2024-07-12 12:38:04.780353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.944 [2024-07-12 12:38:04.780362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63304 len:8 PRP1 0x0 PRP2 0x0 00:26:35.944 [2024-07-12 12:38:04.780371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.944 [2024-07-12 12:38:04.780380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.944 [2024-07-12 12:38:04.780387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.944 [2024-07-12 12:38:04.780395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63312 len:8 PRP1 0x0 PRP2 0x0 00:26:35.944 [2024-07-12 12:38:04.780405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.944 [2024-07-12 12:38:04.780414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.944 [2024-07-12 12:38:04.780421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.944 [2024-07-12 12:38:04.780431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63320 len:8 PRP1 0x0 PRP2 0x0 00:26:35.944 [2024-07-12 12:38:04.780440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.944 [2024-07-12 12:38:04.780449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.944 [2024-07-12 12:38:04.780457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.944 [2024-07-12 12:38:04.780465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63328 len:8 PRP1 0x0 PRP2 0x0 00:26:35.944 [2024-07-12 12:38:04.780474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.944 [2024-07-12 12:38:04.780483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.944 [2024-07-12 12:38:04.780490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.944 [2024-07-12 12:38:04.780498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63336 len:8 PRP1 0x0 PRP2 0x0 00:26:35.944 [2024-07-12 12:38:04.780507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.945 [2024-07-12 12:38:04.780516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.945 [2024-07-12 12:38:04.780523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.945 [2024-07-12 12:38:04.780531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63344 len:8 PRP1 0x0 PRP2 0x0 00:26:35.945 [2024-07-12 12:38:04.780540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.945 [2024-07-12 12:38:04.780549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.945 [2024-07-12 12:38:04.780556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.945 [2024-07-12 12:38:04.780563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63352 len:8 PRP1 0x0 PRP2 0x0 00:26:35.945 [2024-07-12 12:38:04.780572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.945 [2024-07-12 12:38:04.780581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.945 [2024-07-12 12:38:04.780588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.945 [2024-07-12 12:38:04.780595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63360 len:8 PRP1 0x0 PRP2 0x0 00:26:35.945 [2024-07-12 12:38:04.780604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.945 [2024-07-12 12:38:04.780613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.945 [2024-07-12 12:38:04.780620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.945 [2024-07-12 12:38:04.780628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63368 len:8 PRP1 0x0 PRP2 0x0 00:26:35.945 [2024-07-12 12:38:04.780637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.945 [2024-07-12 12:38:04.780646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.945 [2024-07-12 12:38:04.780653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.945 [2024-07-12 12:38:04.780661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63376 len:8 PRP1 0x0 PRP2 0x0 00:26:35.945 [2024-07-12 12:38:04.780670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.945 [2024-07-12 12:38:04.780680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.945 [2024-07-12 12:38:04.780687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.945 [2024-07-12 12:38:04.780695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63384 len:8 PRP1 0x0 PRP2 0x0 00:26:35.945 [2024-07-12 12:38:04.780703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.945 [2024-07-12 
12:38:04.780712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.945 [2024-07-12 12:38:04.780720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.945 [2024-07-12 12:38:04.780727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63392 len:8 PRP1 0x0 PRP2 0x0 00:26:35.945 [2024-07-12 12:38:04.780737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.945 [2024-07-12 12:38:04.780746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.945 [2024-07-12 12:38:04.780753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.945 [2024-07-12 12:38:04.780760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63400 len:8 PRP1 0x0 PRP2 0x0 00:26:35.945 [2024-07-12 12:38:04.780769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.945 [2024-07-12 12:38:04.780778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:35.945 [2024-07-12 12:38:04.780809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:35.945 [2024-07-12 12:38:04.780819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63408 len:8 PRP1 0x0 PRP2 0x0 00:26:35.945 [2024-07-12 12:38:04.780828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.945 [2024-07-12 12:38:04.780888] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14366f0 was disconnected and freed. reset controller. 
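The long run of identical completions above is the bdev_nvme layer draining its queue: every outstanding WRITE is completed manually with status ABORTED - SQ DELETION (the "(00/08)" is status code type 00h, status code 08h, command aborted due to SQ deletion) before qpair 0x14366f0 is freed and the controller reset begins. When reading a capture like this, a small filter helps confirm how much I/O was aborted and which LBAs were touched. The snippet below is a hypothetical post-processing helper, not part of host/timeout.sh, and bdevperf.log is a stand-in name for wherever this console output was saved:

  # Hypothetical helper: count aborted completions and show the lowest and
  # highest LBA mentioned in a saved copy of this console output.
  grep -c 'ABORTED - SQ DELETION' bdevperf.log
  grep -oE 'lba:[0-9]+' bdevperf.log | cut -d: -f2 | sort -n | sed -n '1p;$p'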
00:26:35.945 [2024-07-12 12:38:04.781006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.945 [2024-07-12 12:38:04.781023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.945 [2024-07-12 12:38:04.781035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.945 [2024-07-12 12:38:04.781044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.945 [2024-07-12 12:38:04.781054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.945 [2024-07-12 12:38:04.781063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.945 [2024-07-12 12:38:04.781073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.945 [2024-07-12 12:38:04.781082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.945 [2024-07-12 12:38:04.781091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1417760 is same with the state(5) to be set 00:26:35.945 [2024-07-12 12:38:04.781310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.945 [2024-07-12 12:38:04.781338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1417760 (9): Bad file descriptor 00:26:35.945 [2024-07-12 12:38:04.781450] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.945 [2024-07-12 12:38:04.781478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1417760 with addr=10.0.0.2, port=4420 00:26:35.945 [2024-07-12 12:38:04.781490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1417760 is same with the state(5) to be set 00:26:35.945 [2024-07-12 12:38:04.781515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1417760 (9): Bad file descriptor 00:26:35.945 [2024-07-12 12:38:04.781531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.945 [2024-07-12 12:38:04.781540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.945 [2024-07-12 12:38:04.781551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.945 [2024-07-12 12:38:04.781570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
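errno 111 is ECONNREFUSED: nothing is accepting on 10.0.0.2 port 4420 while this phase runs with the subsystem listener removed, so each reset attempt ends with the controller back in the failed state and another retry scheduled. A quick way to confirm the target-side state during this window is to ask the nvmf target for its listeners; this is a sketch run against the target's default RPC socket (not bdevperf's /var/tmp/bdevperf.sock), and the expected empty result is an assumption based on the connection refusals above:

  # Sketch: confirm the subsystem has no TCP listener while the reconnect
  # attempts above keep failing with ECONNREFUSED (errno 111).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners \
      nqn.2016-06.io.spdk:cnode1 | jq length    # expected to print 0 here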
00:26:35.945 [2024-07-12 12:38:04.781581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:37.844 [2024-07-12 12:38:06.781898] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.844 [2024-07-12 12:38:06.781969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1417760 with addr=10.0.0.2, port=4420 00:26:37.844 [2024-07-12 12:38:06.781987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1417760 is same with the state(5) to be set 00:26:37.844 [2024-07-12 12:38:06.782016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1417760 (9): Bad file descriptor 00:26:37.844 [2024-07-12 12:38:06.782050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:37.844 [2024-07-12 12:38:06.782062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:37.844 [2024-07-12 12:38:06.782073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:37.844 [2024-07-12 12:38:06.782102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:37.844 [2024-07-12 12:38:06.782113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:37.844 12:38:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:26:37.844 12:38:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:37.844 12:38:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:38.102 12:38:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:26:38.102 12:38:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:26:38.102 12:38:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:38.102 12:38:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:38.359 12:38:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:26:38.359 12:38:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:26:39.816 [2024-07-12 12:38:08.782393] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.817 [2024-07-12 12:38:08.782487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1417760 with addr=10.0.0.2, port=4420 00:26:39.817 [2024-07-12 12:38:08.782505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1417760 is same with the state(5) to be set 00:26:39.817 [2024-07-12 12:38:08.782532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1417760 (9): Bad file descriptor 00:26:39.817 [2024-07-12 12:38:08.782570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:39.817 [2024-07-12 12:38:08.782581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:39.817 [2024-07-12 12:38:08.782592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
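The xtrace lines above show what host/timeout.sh checks while the path is down: the helpers traced at the @37 and @41 call sites query bdevperf over its RPC socket, and both the controller (NVMe0) and its namespace bdev (NVMe0n1) are still reported even though every reconnect attempt is failing. Reconstructed from that trace (a sketch; the authoritative definitions live in host/timeout.sh):

  # Reconstructed from the xtrace above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  get_controller() { "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'; }
  get_bdev()       { "$rpc" -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'; }

  [[ "$(get_controller)" == "NVMe0" ]]    # controller object is still registered
  [[ "$(get_bdev)" == "NVMe0n1" ]]        # and its namespace bdev still exists
  sleep 5                                 # give the reconnect attempts time to keep failing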
00:26:39.817 [2024-07-12 12:38:08.782621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:39.817 [2024-07-12 12:38:08.782633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.715 [2024-07-12 12:38:10.782696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.715 [2024-07-12 12:38:10.782755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.715 [2024-07-12 12:38:10.782769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.715 [2024-07-12 12:38:10.782780] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:26:41.715 [2024-07-12 12:38:10.782821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.087 00:26:43.087 Latency(us) 00:26:43.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:43.087 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:43.087 Verification LBA range: start 0x0 length 0x4000 00:26:43.087 NVMe0n1 : 8.12 960.19 3.75 15.76 0.00 131243.68 3961.95 7046430.72 00:26:43.087 =================================================================================================================== 00:26:43.087 Total : 960.19 3.75 15.76 0.00 131243.68 3961.95 7046430.72 00:26:43.087 0 00:26:43.345 12:38:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:26:43.345 12:38:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:43.345 12:38:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:43.670 12:38:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:26:43.670 12:38:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:26:43.670 12:38:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:43.670 12:38:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:43.927 12:38:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:26:43.927 12:38:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 97008 00:26:43.927 12:38:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96983 00:26:43.927 12:38:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96983 ']' 00:26:43.927 12:38:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96983 00:26:43.927 12:38:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:26:43.927 12:38:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:43.927 12:38:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96983 00:26:43.927 killing process with pid 96983 00:26:43.927 Received shutdown signal, test time was about 9.311321 seconds 00:26:43.927 00:26:43.927 Latency(us) 00:26:43.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:43.927 =================================================================================================================== 00:26:43.927 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:43.927 12:38:12 nvmf_tcp.nvmf_timeout -- 
common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:43.927 12:38:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:43.927 12:38:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96983' 00:26:43.927 12:38:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96983 00:26:43.927 12:38:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96983 00:26:44.187 12:38:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:44.444 [2024-07-12 12:38:13.367521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:44.444 12:38:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=97123 00:26:44.444 12:38:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:44.444 12:38:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 97123 /var/tmp/bdevperf.sock 00:26:44.444 12:38:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 97123 ']' 00:26:44.444 12:38:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:44.444 12:38:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:44.444 12:38:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:44.444 12:38:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:44.444 12:38:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.444 [2024-07-12 12:38:13.430257] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:26:44.444 [2024-07-12 12:38:13.430518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97123 ] 00:26:44.701 [2024-07-12 12:38:13.561252] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.701 [2024-07-12 12:38:13.651696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:44.701 [2024-07-12 12:38:13.705504] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:45.630 12:38:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:45.630 12:38:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:26:45.630 12:38:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:45.630 12:38:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:26:46.193 NVMe0n1 00:26:46.193 12:38:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=97148 00:26:46.193 12:38:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:46.193 12:38:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:26:46.193 Running I/O for 10 seconds... 00:26:47.123 12:38:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:47.383 [2024-07-12 12:38:16.320354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.383 [2024-07-12 12:38:16.320415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.383 [2024-07-12 12:38:16.320441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.383 [2024-07-12 12:38:16.320453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.383 [2024-07-12 12:38:16.320466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.383 [2024-07-12 12:38:16.320476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.383 [2024-07-12 12:38:16.320488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.383 [2024-07-12 12:38:16.320497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.383 [2024-07-12 12:38:16.320508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.383 [2024-07-12 12:38:16.320518] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.383 [2024-07-12 12:38:16.320530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.383 [2024-07-12 12:38:16.320540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.383 [2024-07-12 12:38:16.320551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.383 [2024-07-12 12:38:16.320561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.383 [2024-07-12 12:38:16.320573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.383 [2024-07-12 12:38:16.320583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.383 [2024-07-12 12:38:16.320594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.383 [2024-07-12 12:38:16.320604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.383 [2024-07-12 12:38:16.320615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.383 [2024-07-12 12:38:16.320625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.383 [2024-07-12 12:38:16.320636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.383 [2024-07-12 12:38:16.320646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.383 [2024-07-12 12:38:16.320657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.383 [2024-07-12 12:38:16.320667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.383 [2024-07-12 12:38:16.320678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.383 [2024-07-12 12:38:16.320688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.383 [2024-07-12 12:38:16.320708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.383 [2024-07-12 12:38:16.320718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.383 [2024-07-12 12:38:16.320729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.383 [2024-07-12 12:38:16.320739] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.383 [2024-07-12 12:38:16.320750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.383 [2024-07-12 12:38:16.320760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.383 [2024-07-12 12:38:16.320771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.383 [2024-07-12 12:38:16.320781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.383 [2024-07-12 12:38:16.320818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.383 [2024-07-12 12:38:16.320830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.383 [2024-07-12 12:38:16.320842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.383 [2024-07-12 12:38:16.320852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.383 [2024-07-12 12:38:16.320864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.383 [2024-07-12 12:38:16.320874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.383 [2024-07-12 12:38:16.320885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.320896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.320907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.320917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.320928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.320937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.320948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.320958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.320969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.320978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.320989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 
[2024-07-12 12:38:16.321200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.384 [2024-07-12 12:38:16.321529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.384 [2024-07-12 12:38:16.321540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.321561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.321581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.321602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.321623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:17 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.321643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.321664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.321684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.321705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.321725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.321745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.321766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.321797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.321819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.321840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.321866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.321887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.321907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.321928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.321948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.321968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.321989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.321998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.322009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.322018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.322029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.322039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.322050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 
12:38:16.322059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.322070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.322079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.322091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.322100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.322111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.322120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.322131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.322140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.322152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.322161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.322172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.322182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.322197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.385 [2024-07-12 12:38:16.322207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.385 [2024-07-12 12:38:16.322218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.386 [2024-07-12 12:38:16.322227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.386 [2024-07-12 12:38:16.322247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.386 [2024-07-12 12:38:16.322268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.386 [2024-07-12 12:38:16.322288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.386 [2024-07-12 12:38:16.322308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.386 [2024-07-12 12:38:16.322328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.386 [2024-07-12 12:38:16.322349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.386 [2024-07-12 12:38:16.322369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.386 [2024-07-12 12:38:16.322390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.386 [2024-07-12 12:38:16.322410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.386 [2024-07-12 12:38:16.322431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.386 [2024-07-12 12:38:16.322451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.386 [2024-07-12 12:38:16.322472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.386 [2024-07-12 12:38:16.322494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.386 [2024-07-12 12:38:16.322515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.386 [2024-07-12 12:38:16.322541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.386 [2024-07-12 12:38:16.322562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.386 [2024-07-12 12:38:16.322582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.386 [2024-07-12 12:38:16.322603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.386 [2024-07-12 12:38:16.322624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.386 [2024-07-12 12:38:16.322644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.386 [2024-07-12 12:38:16.322665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.386 [2024-07-12 12:38:16.322684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.386 [2024-07-12 12:38:16.322705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.386 [2024-07-12 12:38:16.322725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.386 [2024-07-12 12:38:16.322745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.386 [2024-07-12 12:38:16.322766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.386 [2024-07-12 12:38:16.322795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.386 [2024-07-12 12:38:16.322817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.386 [2024-07-12 12:38:16.322837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.386 [2024-07-12 12:38:16.322865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.386 [2024-07-12 12:38:16.322891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.386 [2024-07-12 12:38:16.322911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 
[2024-07-12 12:38:16.322923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.386 [2024-07-12 12:38:16.322932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.386 [2024-07-12 12:38:16.322943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.386 [2024-07-12 12:38:16.322953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.387 [2024-07-12 12:38:16.322963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.387 [2024-07-12 12:38:16.322973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.387 [2024-07-12 12:38:16.322984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:47.387 [2024-07-12 12:38:16.322993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.387 [2024-07-12 12:38:16.323004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.387 [2024-07-12 12:38:16.323013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.387 [2024-07-12 12:38:16.323024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.387 [2024-07-12 12:38:16.323034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.387 [2024-07-12 12:38:16.323045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.387 [2024-07-12 12:38:16.323054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.387 [2024-07-12 12:38:16.323065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.387 [2024-07-12 12:38:16.323075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.387 [2024-07-12 12:38:16.323086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.387 [2024-07-12 12:38:16.323095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.387 [2024-07-12 12:38:16.323106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.387 [2024-07-12 12:38:16.323116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.387 [2024-07-12 12:38:16.323127] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.387 [2024-07-12 12:38:16.323137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.387 [2024-07-12 12:38:16.323147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1793550 is same with the state(5) to be set 00:26:47.387 [2024-07-12 12:38:16.323160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:47.387 [2024-07-12 12:38:16.323168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:47.387 [2024-07-12 12:38:16.323176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64912 len:8 PRP1 0x0 PRP2 0x0 00:26:47.387 [2024-07-12 12:38:16.323191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.387 [2024-07-12 12:38:16.323243] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1793550 was disconnected and freed. reset controller. 00:26:47.387 [2024-07-12 12:38:16.323374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.387 [2024-07-12 12:38:16.323398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.387 [2024-07-12 12:38:16.323410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.387 [2024-07-12 12:38:16.323420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.387 [2024-07-12 12:38:16.323430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.387 [2024-07-12 12:38:16.323439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.387 [2024-07-12 12:38:16.323449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.387 [2024-07-12 12:38:16.323458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.387 [2024-07-12 12:38:16.323467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1774760 is same with the state(5) to be set 00:26:47.387 [2024-07-12 12:38:16.323685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.387 [2024-07-12 12:38:16.323708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1774760 (9): Bad file descriptor 00:26:47.387 [2024-07-12 12:38:16.323819] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.387 [2024-07-12 12:38:16.323841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1774760 with addr=10.0.0.2, port=4420 00:26:47.387 [2024-07-12 12:38:16.323852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1774760 is same with the state(5) to be set 00:26:47.387 
[2024-07-12 12:38:16.323871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1774760 (9): Bad file descriptor 00:26:47.387 [2024-07-12 12:38:16.323887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:47.387 [2024-07-12 12:38:16.323897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:47.387 [2024-07-12 12:38:16.323907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:47.387 [2024-07-12 12:38:16.323927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:47.387 [2024-07-12 12:38:16.323937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:47.387 12:38:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:26:48.400 [2024-07-12 12:38:17.324072] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.400 [2024-07-12 12:38:17.324161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1774760 with addr=10.0.0.2, port=4420 00:26:48.400 [2024-07-12 12:38:17.324178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1774760 is same with the state(5) to be set 00:26:48.400 [2024-07-12 12:38:17.324206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1774760 (9): Bad file descriptor 00:26:48.400 [2024-07-12 12:38:17.324227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.400 [2024-07-12 12:38:17.324237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.400 [2024-07-12 12:38:17.324249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.400 [2024-07-12 12:38:17.324277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.400 [2024-07-12 12:38:17.324288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.400 12:38:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:48.657 [2024-07-12 12:38:17.595899] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.657 12:38:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 97148 00:26:49.589 [2024-07-12 12:38:18.341668] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:56.155 00:26:56.155 Latency(us) 00:26:56.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.155 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:56.155 Verification LBA range: start 0x0 length 0x4000 00:26:56.155 NVMe0n1 : 10.01 6314.40 24.67 0.00 0.00 20228.53 1325.61 3019898.88 00:26:56.155 =================================================================================================================== 00:26:56.155 Total : 6314.40 24.67 0.00 0.00 20228.53 1325.61 3019898.88 00:26:56.155 0 00:26:56.155 12:38:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=97253 00:26:56.155 12:38:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:56.155 12:38:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:26:56.413 Running I/O for 10 seconds... 00:26:57.406 12:38:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:57.406 [2024-07-12 12:38:26.426871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.406 [2024-07-12 12:38:26.426933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.426960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.426971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.426984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.426993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.407 [2024-07-12 12:38:26.427127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.407 [2024-07-12 12:38:26.427148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.407 [2024-07-12 12:38:26.427169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.407 [2024-07-12 12:38:26.427190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.407 [2024-07-12 12:38:26.427211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.407 [2024-07-12 12:38:26.427241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.407 [2024-07-12 12:38:26.427270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.407 [2024-07-12 12:38:26.427301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:57.407 [2024-07-12 12:38:26.427341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427561] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.407 [2024-07-12 12:38:26.427751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.407 [2024-07-12 12:38:26.427772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427798] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.407 [2024-07-12 12:38:26.427811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.407 [2024-07-12 12:38:26.427834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.407 [2024-07-12 12:38:26.427856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.407 [2024-07-12 12:38:26.427876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.407 [2024-07-12 12:38:26.427897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.407 [2024-07-12 12:38:26.427908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.408 [2024-07-12 12:38:26.427918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.427929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.408 [2024-07-12 12:38:26.427938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.427949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.427958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.427970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.427979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.427990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 
lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.408 [2024-07-12 12:38:26.428335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.408 [2024-07-12 12:38:26.428356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.408 [2024-07-12 12:38:26.428376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.408 [2024-07-12 12:38:26.428397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.408 [2024-07-12 12:38:26.428418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.408 [2024-07-12 
12:38:26.428438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.408 [2024-07-12 12:38:26.428459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.408 [2024-07-12 12:38:26.428480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.408 [2024-07-12 12:38:26.428501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.408 [2024-07-12 12:38:26.428522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.408 [2024-07-12 12:38:26.428543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.408 [2024-07-12 12:38:26.428563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.408 [2024-07-12 12:38:26.428584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.408 [2024-07-12 12:38:26.428604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.408 [2024-07-12 12:38:26.428625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.408 [2024-07-12 12:38:26.428644] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.408 [2024-07-12 12:38:26.428756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.408 [2024-07-12 12:38:26.428766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.428777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.409 [2024-07-12 12:38:26.428797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.428809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.409 [2024-07-12 12:38:26.428819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.428830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.409 [2024-07-12 12:38:26.428839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.428851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.409 [2024-07-12 12:38:26.428860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.428871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.409 [2024-07-12 12:38:26.428880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.428891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.409 [2024-07-12 12:38:26.428900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.428911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.409 [2024-07-12 12:38:26.428920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.428931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.409 [2024-07-12 12:38:26.428940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.428951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.409 [2024-07-12 12:38:26.428960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.428971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.428980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.428991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.429000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.429030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.429051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.429080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.429100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.429127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.429147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.429176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.429196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.429216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.429236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.429256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.429276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.429296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 
12:38:26.429308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.429317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.409 [2024-07-12 12:38:26.429337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.409 [2024-07-12 12:38:26.429357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.409 [2024-07-12 12:38:26.429382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.409 [2024-07-12 12:38:26.429402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.409 [2024-07-12 12:38:26.429431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.409 [2024-07-12 12:38:26.429451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.409 [2024-07-12 12:38:26.429471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.409 [2024-07-12 12:38:26.429491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.409 [2024-07-12 12:38:26.429515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429526] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.409 [2024-07-12 12:38:26.429539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.409 [2024-07-12 12:38:26.429559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.429579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.429600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.429620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.429640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.429660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.409 [2024-07-12 12:38:26.429680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.409 [2024-07-12 12:38:26.429691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:57.410 [2024-07-12 12:38:26.429700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.410 [2024-07-12 12:38:26.429738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:57.410 [2024-07-12 12:38:26.429754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:57.410 [2024-07-12 12:38:26.429763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79672 len:8 PRP1 0x0 PRP2 0x0 00:26:57.410 [2024-07-12 
12:38:26.429772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.410 [2024-07-12 12:38:26.429841] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1770870 was disconnected and freed. reset controller. 00:26:57.410 [2024-07-12 12:38:26.430071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:57.410 [2024-07-12 12:38:26.430160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1774760 (9): Bad file descriptor 00:26:57.410 [2024-07-12 12:38:26.430290] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.410 [2024-07-12 12:38:26.430312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1774760 with addr=10.0.0.2, port=4420 00:26:57.410 [2024-07-12 12:38:26.430322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1774760 is same with the state(5) to be set 00:26:57.410 [2024-07-12 12:38:26.430340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1774760 (9): Bad file descriptor 00:26:57.410 [2024-07-12 12:38:26.430356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:57.410 [2024-07-12 12:38:26.430366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:57.410 [2024-07-12 12:38:26.430376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:57.410 [2024-07-12 12:38:26.430396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:57.410 [2024-07-12 12:38:26.430407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:57.410 12:38:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:26:58.781 [2024-07-12 12:38:27.430561] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.781 [2024-07-12 12:38:27.430840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1774760 with addr=10.0.0.2, port=4420 00:26:58.781 [2024-07-12 12:38:27.430986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1774760 is same with the state(5) to be set 00:26:58.781 [2024-07-12 12:38:27.431146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1774760 (9): Bad file descriptor 00:26:58.781 [2024-07-12 12:38:27.431317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.781 [2024-07-12 12:38:27.431388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.781 [2024-07-12 12:38:27.431565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.781 [2024-07-12 12:38:27.431637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:58.781 [2024-07-12 12:38:27.431758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:59.712 [2024-07-12 12:38:28.432033] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.712 [2024-07-12 12:38:28.432247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1774760 with addr=10.0.0.2, port=4420 00:26:59.712 [2024-07-12 12:38:28.432390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1774760 is same with the state(5) to be set 00:26:59.712 [2024-07-12 12:38:28.432646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1774760 (9): Bad file descriptor 00:26:59.712 [2024-07-12 12:38:28.432810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:59.712 [2024-07-12 12:38:28.432946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:59.712 [2024-07-12 12:38:28.433091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:59.712 [2024-07-12 12:38:28.433216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.712 [2024-07-12 12:38:28.433266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.643 [2024-07-12 12:38:29.434811] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.643 [2024-07-12 12:38:29.435068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1774760 with addr=10.0.0.2, port=4420 00:27:00.643 [2024-07-12 12:38:29.435219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1774760 is same with the state(5) to be set 00:27:00.643 [2024-07-12 12:38:29.435695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1774760 (9): Bad file descriptor 00:27:00.643 [2024-07-12 12:38:29.436099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.643 [2024-07-12 12:38:29.436255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.643 [2024-07-12 12:38:29.436437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.643 [2024-07-12 12:38:29.440309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.643 [2024-07-12 12:38:29.440444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.643 12:38:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:00.643 [2024-07-12 12:38:29.695971] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:00.901 12:38:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 97253 00:27:01.465 [2024-07-12 12:38:30.478508] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:06.792 00:27:06.792 Latency(us) 00:27:06.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.792 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:06.792 Verification LBA range: start 0x0 length 0x4000 00:27:06.792 NVMe0n1 : 10.01 5494.21 21.46 3702.11 0.00 13889.33 681.43 3019898.88 00:27:06.792 =================================================================================================================== 00:27:06.792 Total : 5494.21 21.46 3702.11 0.00 13889.33 0.00 3019898.88 00:27:06.792 0 00:27:06.792 12:38:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 97123 00:27:06.792 12:38:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 97123 ']' 00:27:06.792 12:38:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 97123 00:27:06.792 12:38:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:27:06.792 12:38:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:06.792 12:38:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97123 00:27:06.792 killing process with pid 97123 00:27:06.792 Received shutdown signal, test time was about 10.000000 seconds 00:27:06.792 00:27:06.792 Latency(us) 00:27:06.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.792 =================================================================================================================== 00:27:06.792 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:06.792 12:38:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:27:06.792 12:38:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:27:06.792 12:38:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97123' 00:27:06.792 12:38:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 97123 00:27:06.792 12:38:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 97123 00:27:06.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:06.792 12:38:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=97362 00:27:06.793 12:38:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 97362 /var/tmp/bdevperf.sock 00:27:06.793 12:38:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:27:06.793 12:38:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 97362 ']' 00:27:06.793 12:38:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:06.793 12:38:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:06.793 12:38:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:06.793 12:38:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:06.793 12:38:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.793 [2024-07-12 12:38:35.624108] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:27:06.793 [2024-07-12 12:38:35.624413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97362 ] 00:27:06.793 [2024-07-12 12:38:35.764934] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.793 [2024-07-12 12:38:35.864072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:07.052 [2024-07-12 12:38:35.917876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:27:07.618 12:38:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:07.618 12:38:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:27:07.618 12:38:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=97378 00:27:07.618 12:38:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97362 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:27:07.618 12:38:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:27:07.875 12:38:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:27:08.439 NVMe0n1 00:27:08.439 12:38:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=97424 00:27:08.439 12:38:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:08.439 12:38:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:27:08.439 Running I/O for 10 seconds... 
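Note: at this point the second bdevperf instance (pid 97362) has been configured for the reconnect-delay half of the test: bdev_nvme options are set, the controller is attached with explicit reconnect knobs, a bpftrace script (nvmf_timeout.bt, pid 97378) is watching the bdev_nvme reset/reconnect events, and a 10-second randread run is started. The two RPCs below are copied verbatim from the trace; with --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5 one would expect reconnect attempts roughly 2 seconds apart once the target disappears, which is what the trace.txt dump near the end of this section shows. A minimal sketch (the RPC/SOCK shell variables are only shorthand added here):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock

  # Global NVMe bdev options for this run (flags copied verbatim from the log).
  $RPC -s $SOCK bdev_nvme_set_options -r -1 -e 9

  # Attach the target; --reconnect-delay-sec / --ctrlr-loss-timeout-sec set the
  # retry cadence and how long the bdev layer keeps trying before giving the
  # controller up for lost.
  $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2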
00:27:09.371 12:38:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:09.630 [2024-07-12 12:38:38.527648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.527707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.527733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:93312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.527745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.527756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.527773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.527801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.527814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.527826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.527836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.527847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.527857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.527868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.527877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.527889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.527898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.527910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:85280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.527931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.527943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:09.631 [2024-07-12 12:38:38.527952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.527964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:36368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.527973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.527985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:53048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.527994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.528005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.528014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.528027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:119872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.528036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.528047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.528056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.528068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.528077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.528088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.528097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.528111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.528121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.528132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.528141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.528152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:58632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 
12:38:38.528161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.528172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.528182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.528193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.528202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.528213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.528222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.528232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:56096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.528241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.528253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.528261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.528273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.528282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.528293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.528302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.528313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.528322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.528333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.528341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.528353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:93512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.528362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.528374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.631 [2024-07-12 12:38:38.528383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.631 [2024-07-12 12:38:38.528394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:101152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:35152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:26512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:54552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.528984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.528993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 
[2024-07-12 12:38:38.529004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.529013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.529024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:52472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.529033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.529044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:116288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.529053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.529064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.529073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.632 [2024-07-12 12:38:38.529084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.632 [2024-07-12 12:38:38.529093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529219] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:58472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:46 nsid:1 lba:115552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:124144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:46600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47816 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.633 [2024-07-12 12:38:38.529775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.633 [2024-07-12 12:38:38.529802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.529813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.529824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:123912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.529834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.529845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.529854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.529865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:09.634 [2024-07-12 12:38:38.529874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.529885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.529901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.529912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.529921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.529932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:114296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.529942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.529953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.529962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.529974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.529983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.529995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.530004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:34448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.530024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.530044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.530064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:52760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.530084] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.530104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.530124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.530150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.530170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.530190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:111096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.530211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.530236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.530257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.530276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.530296] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.530316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.530336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.530356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.530377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:28408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.530398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.634 [2024-07-12 12:38:38.530418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1262f40 is same with the state(5) to be set 00:27:09.634 [2024-07-12 12:38:38.530441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.634 [2024-07-12 12:38:38.530449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.634 [2024-07-12 12:38:38.530457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81456 len:8 PRP1 0x0 PRP2 0x0 00:27:09.634 [2024-07-12 12:38:38.530471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530524] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1262f40 was disconnected and freed. reset controller. 
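Note: the long block above is one burst of cleanup rather than many separate failures. When the listener is removed at host/timeout.sh@126, the target-side submission queue goes away and every READ still queued on qpair 0x1262f40 is completed manually with ABORTED - SQ DELETION (00/08); the qpair is then disconnected and freed and the bdev layer starts its reset/reconnect cycle. If a copy of this log is saved to a file, the number of aborted in-flight commands can be gauged with a one-liner (the file name is only an example):

  # Count aborted completions in a saved copy of this log (example path).
  grep -c 'ABORTED - SQ DELETION' build.log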
00:27:09.634 [2024-07-12 12:38:38.530603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.634 [2024-07-12 12:38:38.530619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.634 [2024-07-12 12:38:38.530639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.634 [2024-07-12 12:38:38.530649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.634 [2024-07-12 12:38:38.530658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.635 [2024-07-12 12:38:38.530668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.635 [2024-07-12 12:38:38.530682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.635 [2024-07-12 12:38:38.530691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122d920 is same with the state(5) to be set 00:27:09.635 [2024-07-12 12:38:38.530956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:09.635 [2024-07-12 12:38:38.530986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x122d920 (9): Bad file descriptor 00:27:09.635 [2024-07-12 12:38:38.531090] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.635 [2024-07-12 12:38:38.531112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x122d920 with addr=10.0.0.2, port=4420 00:27:09.635 [2024-07-12 12:38:38.531123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122d920 is same with the state(5) to be set 00:27:09.635 [2024-07-12 12:38:38.531141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x122d920 (9): Bad file descriptor 00:27:09.635 [2024-07-12 12:38:38.531158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:09.635 [2024-07-12 12:38:38.531167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:09.635 [2024-07-12 12:38:38.531177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:09.635 [2024-07-12 12:38:38.531197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:09.635 [2024-07-12 12:38:38.531207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:09.635 12:38:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 97424 00:27:11.531 [2024-07-12 12:38:40.531606] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.531 [2024-07-12 12:38:40.531705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x122d920 with addr=10.0.0.2, port=4420 00:27:11.531 [2024-07-12 12:38:40.531723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122d920 is same with the state(5) to be set 00:27:11.531 [2024-07-12 12:38:40.531753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x122d920 (9): Bad file descriptor 00:27:11.531 [2024-07-12 12:38:40.531774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:11.531 [2024-07-12 12:38:40.531798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:11.531 [2024-07-12 12:38:40.531811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:11.531 [2024-07-12 12:38:40.531840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:11.531 [2024-07-12 12:38:40.531852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.064 [2024-07-12 12:38:42.532062] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.064 [2024-07-12 12:38:42.532133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x122d920 with addr=10.0.0.2, port=4420 00:27:14.064 [2024-07-12 12:38:42.532150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122d920 is same with the state(5) to be set 00:27:14.064 [2024-07-12 12:38:42.532178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x122d920 (9): Bad file descriptor 00:27:14.064 [2024-07-12 12:38:42.532198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.064 [2024-07-12 12:38:42.532209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.064 [2024-07-12 12:38:42.532219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.064 [2024-07-12 12:38:42.532247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.064 [2024-07-12 12:38:42.532258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.966 [2024-07-12 12:38:44.532411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:15.966 [2024-07-12 12:38:44.532486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.966 [2024-07-12 12:38:44.532500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.966 [2024-07-12 12:38:44.532510] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:15.966 [2024-07-12 12:38:44.532540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.532 00:27:16.532 Latency(us) 00:27:16.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:16.532 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:27:16.532 NVMe0n1 : 8.20 2091.27 8.17 15.61 0.00 60702.42 8519.68 7015926.69 00:27:16.532 =================================================================================================================== 00:27:16.532 Total : 2091.27 8.17 15.61 0.00 60702.42 8519.68 7015926.69 00:27:16.532 0 00:27:16.532 12:38:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:16.532 Attaching 5 probes... 00:27:16.532 1308.747556: reset bdev controller NVMe0 00:27:16.532 1308.821052: reconnect bdev controller NVMe0 00:27:16.532 3309.238285: reconnect delay bdev controller NVMe0 00:27:16.532 3309.263307: reconnect bdev controller NVMe0 00:27:16.532 5309.717771: reconnect delay bdev controller NVMe0 00:27:16.532 5309.738528: reconnect bdev controller NVMe0 00:27:16.532 7310.177149: reconnect delay bdev controller NVMe0 00:27:16.532 7310.201591: reconnect bdev controller NVMe0 00:27:16.532 12:38:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:27:16.532 12:38:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:27:16.532 12:38:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 97378 00:27:16.532 12:38:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:16.532 12:38:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 97362 00:27:16.532 12:38:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 97362 ']' 00:27:16.532 12:38:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 97362 00:27:16.532 12:38:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:27:16.532 12:38:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:16.532 12:38:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97362 00:27:16.532 killing process with pid 97362 00:27:16.532 Received shutdown signal, test time was about 8.262696 seconds 00:27:16.532 00:27:16.532 Latency(us) 00:27:16.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:16.532 =================================================================================================================== 00:27:16.532 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:16.532 12:38:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:27:16.532 12:38:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:27:16.532 12:38:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97362' 00:27:16.532 12:38:45 nvmf_tcp.nvmf_timeout -- 
common/autotest_common.sh@967 -- # kill 97362 00:27:16.532 12:38:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 97362 00:27:16.790 12:38:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:17.048 12:38:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:27:17.048 12:38:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:27:17.048 12:38:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:17.048 12:38:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:27:17.048 12:38:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:17.048 12:38:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:27:17.048 12:38:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:17.048 12:38:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:17.048 rmmod nvme_tcp 00:27:17.306 rmmod nvme_fabrics 00:27:17.306 rmmod nvme_keyring 00:27:17.306 12:38:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:17.306 12:38:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:27:17.306 12:38:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:27:17.306 12:38:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 96935 ']' 00:27:17.306 12:38:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 96935 00:27:17.306 12:38:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96935 ']' 00:27:17.306 12:38:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96935 00:27:17.306 12:38:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:27:17.306 12:38:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:17.306 12:38:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96935 00:27:17.306 12:38:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:17.306 12:38:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:17.306 killing process with pid 96935 00:27:17.306 12:38:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96935' 00:27:17.306 12:38:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96935 00:27:17.306 12:38:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96935 00:27:17.564 12:38:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:17.564 12:38:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:17.564 12:38:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:17.564 12:38:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:17.564 12:38:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:17.564 12:38:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.564 12:38:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:17.564 12:38:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.564 12:38:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:17.564 ************************************ 00:27:17.564 END TEST 
nvmf_timeout 00:27:17.564 ************************************ 00:27:17.564 00:27:17.564 real 0m47.395s 00:27:17.564 user 2m19.593s 00:27:17.564 sys 0m5.681s 00:27:17.564 12:38:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:17.564 12:38:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.564 12:38:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:17.564 12:38:46 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:27:17.564 12:38:46 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:27:17.564 12:38:46 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:17.564 12:38:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:17.564 12:38:46 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:27:17.564 ************************************ 00:27:17.564 END TEST nvmf_tcp 00:27:17.564 ************************************ 00:27:17.564 00:27:17.564 real 14m58.042s 00:27:17.564 user 39m41.678s 00:27:17.564 sys 4m6.985s 00:27:17.564 12:38:46 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:17.564 12:38:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:17.564 12:38:46 -- common/autotest_common.sh@1142 -- # return 0 00:27:17.564 12:38:46 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:27:17.564 12:38:46 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:17.564 12:38:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:17.564 12:38:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:17.564 12:38:46 -- common/autotest_common.sh@10 -- # set +x 00:27:17.564 ************************************ 00:27:17.564 START TEST nvmf_dif 00:27:17.564 ************************************ 00:27:17.564 12:38:46 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:17.822 * Looking for test storage... 
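From this point the log leaves the nvmf_tcp group and enters the nvmf_dif suite, driven by /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh. To reproduce only this suite outside the CI wrapper, a minimal sketch would be the following, assuming the repo is already built and that TEST_TRANSPORT is the variable nvmftestinit reads (the "'[' -z tcp ']'" trace below suggests it is exported as tcp by the harness):

    # sketch, not the harness invocation: run the dif suite standalone
    cd /home/vagrant/spdk_repo/spdk
    export TEST_TRANSPORT=tcp
    sudo -E ./test/nvmf/target/dif.sh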
00:27:17.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:17.822 12:38:46 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:17.822 12:38:46 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:17.822 12:38:46 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:17.822 12:38:46 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:17.822 12:38:46 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:17.822 12:38:46 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:17.822 12:38:46 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:17.822 12:38:46 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:17.822 12:38:46 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:17.822 12:38:46 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:17.822 12:38:46 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:17.823 12:38:46 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.823 12:38:46 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.823 12:38:46 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.823 12:38:46 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.823 12:38:46 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.823 12:38:46 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.823 12:38:46 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:17.823 12:38:46 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:17.823 12:38:46 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:17.823 12:38:46 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:17.823 12:38:46 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:17.823 12:38:46 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:17.823 12:38:46 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.823 12:38:46 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:17.823 12:38:46 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:17.823 12:38:46 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:17.823 Cannot find device "nvmf_tgt_br" 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@155 -- # true 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:17.823 Cannot find device "nvmf_tgt_br2" 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@156 -- # true 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:17.823 Cannot find device "nvmf_tgt_br" 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@158 -- # true 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:17.823 Cannot find device "nvmf_tgt_br2" 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@159 -- # true 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:17.823 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@162 -- # true 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:17.823 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@163 -- # true 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:17.823 12:38:46 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:18.081 12:38:46 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:18.081 12:38:46 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:18.081 12:38:46 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:18.081 12:38:46 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:18.081 12:38:46 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:18.081 12:38:46 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:18.081 12:38:46 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:18.081 12:38:46 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:18.081 12:38:46 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:18.081 12:38:46 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:18.081 12:38:46 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:18.081 12:38:47 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:18.081 
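The ip commands above, together with the bridge, iptables and ping steps just below, build the veth topology the rest of the dif suite runs on: nvmf_init_if (10.0.0.1/24) stays in the root namespace as the initiator, while nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into nvmf_tgt_ns_spdk for the target, with the *_br peer ends enslaved to the nvmf_br bridge. A condensed sketch of the same setup, using the names and addresses from this log, with the second target interface and the iptables ACCEPT rules omitted for brevity:

    # sketch: one initiator veth + one target veth joined by a bridge (run as root)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for l in nvmf_init_br nvmf_tgt_br; do ip link set "$l" master nvmf_br; ip link set "$l" up; done
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up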
12:38:47 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:18.081 12:38:47 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:18.081 12:38:47 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:18.081 12:38:47 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:18.081 12:38:47 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:18.081 12:38:47 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:18.081 12:38:47 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:18.081 12:38:47 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:18.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:18.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:27:18.081 00:27:18.081 --- 10.0.0.2 ping statistics --- 00:27:18.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.081 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:27:18.081 12:38:47 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:18.081 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:18.081 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:27:18.081 00:27:18.081 --- 10.0.0.3 ping statistics --- 00:27:18.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.081 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:27:18.081 12:38:47 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:18.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:18.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:27:18.081 00:27:18.081 --- 10.0.0.1 ping statistics --- 00:27:18.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.081 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:27:18.081 12:38:47 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.081 12:38:47 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:27:18.081 12:38:47 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:18.081 12:38:47 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:18.339 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:18.339 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:18.339 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:18.598 12:38:47 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.598 12:38:47 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:18.598 12:38:47 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:18.598 12:38:47 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.598 12:38:47 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:18.598 12:38:47 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:18.598 12:38:47 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:18.598 12:38:47 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:18.598 12:38:47 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:18.598 12:38:47 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:18.598 12:38:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:18.598 12:38:47 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=97853 00:27:18.598 
12:38:47 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:18.598 12:38:47 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 97853 00:27:18.598 12:38:47 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 97853 ']' 00:27:18.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.598 12:38:47 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.598 12:38:47 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:18.598 12:38:47 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.598 12:38:47 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:18.598 12:38:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:18.598 [2024-07-12 12:38:47.544508] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:27:18.598 [2024-07-12 12:38:47.544619] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.856 [2024-07-12 12:38:47.687277] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.856 [2024-07-12 12:38:47.788183] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.856 [2024-07-12 12:38:47.788239] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.856 [2024-07-12 12:38:47.788254] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.856 [2024-07-12 12:38:47.788264] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:18.856 [2024-07-12 12:38:47.788273] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
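The target has been launched as "ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF": -i 0 selects shared-memory instance 0 and -e 0xFFFF enables every tracepoint group, which is what the app_setup_trace notices above refer to. Following those notices, a trace snapshot can be pulled from the running target while the suite executes; this is not something the test does itself, and the spdk_trace path below assumes the default build output location:

    # sketch: capture and decode the nvmf trace published by instance 0
    sudo ./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt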
00:27:18.856 [2024-07-12 12:38:47.788308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.856 [2024-07-12 12:38:47.846997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:27:19.788 12:38:48 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:19.788 12:38:48 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:27:19.788 12:38:48 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:19.788 12:38:48 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:19.788 12:38:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:19.788 12:38:48 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:19.788 12:38:48 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:19.788 12:38:48 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:19.788 12:38:48 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.788 12:38:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:19.788 [2024-07-12 12:38:48.677457] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:19.788 12:38:48 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.788 12:38:48 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:19.788 12:38:48 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:19.788 12:38:48 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:19.788 12:38:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:19.788 ************************************ 00:27:19.788 START TEST fio_dif_1_default 00:27:19.788 ************************************ 00:27:19.788 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:27:19.788 12:38:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:19.788 12:38:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:19.788 12:38:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:19.788 12:38:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:19.788 12:38:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:19.788 12:38:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:19.788 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.788 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:19.788 bdev_null0 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.789 12:38:48 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:19.789 [2024-07-12 12:38:48.725561] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:19.789 { 00:27:19.789 "params": { 00:27:19.789 "name": "Nvme$subsystem", 00:27:19.789 "trtype": "$TEST_TRANSPORT", 00:27:19.789 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.789 "adrfam": "ipv4", 00:27:19.789 "trsvcid": "$NVMF_PORT", 00:27:19.789 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.789 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.789 "hdgst": ${hdgst:-false}, 00:27:19.789 "ddgst": ${ddgst:-false} 00:27:19.789 }, 00:27:19.789 "method": "bdev_nvme_attach_controller" 00:27:19.789 } 00:27:19.789 EOF 00:27:19.789 )") 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:19.789 "params": { 00:27:19.789 "name": "Nvme0", 00:27:19.789 "trtype": "tcp", 00:27:19.789 "traddr": "10.0.0.2", 00:27:19.789 "adrfam": "ipv4", 00:27:19.789 "trsvcid": "4420", 00:27:19.789 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:19.789 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:19.789 "hdgst": false, 00:27:19.789 "ddgst": false 00:27:19.789 }, 00:27:19.789 "method": "bdev_nvme_attach_controller" 00:27:19.789 }' 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:19.789 12:38:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:20.047 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:20.047 fio-3.35 00:27:20.047 Starting 1 thread 00:27:32.255 00:27:32.255 filename0: (groupid=0, jobs=1): err= 0: pid=97925: Fri Jul 12 12:38:59 2024 00:27:32.255 read: IOPS=8383, BW=32.7MiB/s (34.3MB/s)(328MiB/10001msec) 00:27:32.255 slat (nsec): min=6447, max=58114, avg=9043.55, stdev=3742.21 00:27:32.255 clat (usec): min=363, max=3574, avg=450.25, stdev=40.92 00:27:32.255 lat (usec): min=369, max=3584, avg=459.29, stdev=41.62 00:27:32.255 clat percentiles (usec): 00:27:32.255 | 1.00th=[ 392], 5.00th=[ 408], 10.00th=[ 416], 20.00th=[ 429], 00:27:32.255 | 30.00th=[ 437], 40.00th=[ 441], 50.00th=[ 449], 60.00th=[ 457], 00:27:32.255 | 70.00th=[ 461], 80.00th=[ 474], 90.00th=[ 486], 95.00th=[ 498], 00:27:32.255 | 99.00th=[ 529], 99.50th=[ 537], 99.90th=[ 578], 99.95th=[ 660], 00:27:32.255 | 99.99th=[ 996] 00:27:32.255 bw ( KiB/s): min=31616, max=34688, per=100.00%, avg=33561.68, stdev=660.34, samples=19 00:27:32.255 iops : min= 7904, max= 8672, avg=8390.42, stdev=165.08, samples=19 00:27:32.255 lat (usec) : 500=95.37%, 750=4.61%, 1000=0.01% 
00:27:32.255 lat (msec) : 4=0.01% 00:27:32.255 cpu : usr=83.23%, sys=14.85%, ctx=27, majf=0, minf=0 00:27:32.255 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:32.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.255 issued rwts: total=83848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:32.255 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:32.255 00:27:32.255 Run status group 0 (all jobs): 00:27:32.255 READ: bw=32.7MiB/s (34.3MB/s), 32.7MiB/s-32.7MiB/s (34.3MB/s-34.3MB/s), io=328MiB (343MB), run=10001-10001msec 00:27:32.255 12:38:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:32.255 12:38:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:32.255 12:38:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:32.255 12:38:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:32.255 12:38:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:32.255 12:38:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:32.255 12:38:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.255 12:38:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:32.255 12:38:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.255 12:38:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:32.255 12:38:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.255 12:38:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:32.255 ************************************ 00:27:32.255 END TEST fio_dif_1_default 00:27:32.255 ************************************ 00:27:32.255 12:38:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.255 00:27:32.255 real 0m11.004s 00:27:32.255 user 0m8.940s 00:27:32.255 sys 0m1.768s 00:27:32.255 12:38:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:32.255 12:38:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:32.256 12:38:59 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:32.256 12:38:59 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:32.256 12:38:59 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:32.256 12:38:59 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:32.256 12:38:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:32.256 ************************************ 00:27:32.256 START TEST fio_dif_1_multi_subsystems 00:27:32.256 ************************************ 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:32.256 12:38:59 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:32.256 bdev_null0 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:32.256 [2024-07-12 12:38:59.781375] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:32.256 bdev_null1 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.256 { 00:27:32.256 "params": { 00:27:32.256 "name": "Nvme$subsystem", 00:27:32.256 "trtype": "$TEST_TRANSPORT", 00:27:32.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.256 "adrfam": "ipv4", 00:27:32.256 "trsvcid": "$NVMF_PORT", 00:27:32.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.256 "hdgst": ${hdgst:-false}, 00:27:32.256 "ddgst": ${ddgst:-false} 00:27:32.256 }, 00:27:32.256 "method": "bdev_nvme_attach_controller" 00:27:32.256 } 00:27:32.256 EOF 00:27:32.256 )") 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@554 -- # cat 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.256 { 00:27:32.256 "params": { 00:27:32.256 "name": "Nvme$subsystem", 00:27:32.256 "trtype": "$TEST_TRANSPORT", 00:27:32.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.256 "adrfam": "ipv4", 00:27:32.256 "trsvcid": "$NVMF_PORT", 00:27:32.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.256 "hdgst": ${hdgst:-false}, 00:27:32.256 "ddgst": ${ddgst:-false} 00:27:32.256 }, 00:27:32.256 "method": "bdev_nvme_attach_controller" 00:27:32.256 } 00:27:32.256 EOF 00:27:32.256 )") 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
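What the loop above assembles is the SPDK JSON configuration the fio spdk_bdev plugin loads: one bdev_nvme_attach_controller entry per subsystem, merged by jq and passed to fio as /dev/fd/62, with the generated job file on /dev/fd/61; the printf trace just below shows the final two-controller document. Outside the harness the same mechanism looks roughly like this, where the plugin path matches the LD_PRELOAD seen earlier in this log and the two file names are placeholders:

    # sketch: feed a saved JSON config and job file to fio's spdk_bdev ioengine
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme.json /tmp/dif.fio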
00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:32.256 "params": { 00:27:32.256 "name": "Nvme0", 00:27:32.256 "trtype": "tcp", 00:27:32.256 "traddr": "10.0.0.2", 00:27:32.256 "adrfam": "ipv4", 00:27:32.256 "trsvcid": "4420", 00:27:32.256 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:32.256 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:32.256 "hdgst": false, 00:27:32.256 "ddgst": false 00:27:32.256 }, 00:27:32.256 "method": "bdev_nvme_attach_controller" 00:27:32.256 },{ 00:27:32.256 "params": { 00:27:32.256 "name": "Nvme1", 00:27:32.256 "trtype": "tcp", 00:27:32.256 "traddr": "10.0.0.2", 00:27:32.256 "adrfam": "ipv4", 00:27:32.256 "trsvcid": "4420", 00:27:32.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:32.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:32.256 "hdgst": false, 00:27:32.256 "ddgst": false 00:27:32.256 }, 00:27:32.256 "method": "bdev_nvme_attach_controller" 00:27:32.256 }' 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:32.256 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:32.257 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:32.257 12:38:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:32.257 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:32.257 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:32.257 fio-3.35 00:27:32.257 Starting 2 threads 00:27:42.225 00:27:42.225 filename0: (groupid=0, jobs=1): err= 0: pid=98083: Fri Jul 12 12:39:10 2024 00:27:42.225 read: IOPS=4633, BW=18.1MiB/s (19.0MB/s)(181MiB/10001msec) 00:27:42.225 slat (nsec): min=6745, max=61803, avg=14072.10, stdev=4235.88 00:27:42.225 clat (usec): min=674, max=1899, avg=824.25, stdev=55.59 00:27:42.225 lat (usec): min=687, max=1914, avg=838.32, stdev=56.11 00:27:42.225 clat percentiles (usec): 00:27:42.225 | 1.00th=[ 725], 5.00th=[ 758], 10.00th=[ 766], 20.00th=[ 791], 00:27:42.225 | 30.00th=[ 799], 40.00th=[ 807], 50.00th=[ 824], 60.00th=[ 832], 00:27:42.225 | 70.00th=[ 840], 80.00th=[ 857], 90.00th=[ 873], 95.00th=[ 906], 00:27:42.225 | 99.00th=[ 1012], 99.50th=[ 1045], 99.90th=[ 1156], 99.95th=[ 1696], 00:27:42.225 | 99.99th=[ 1811] 00:27:42.225 bw ( KiB/s): min=16704, max=19360, per=50.07%, avg=18558.32, stdev=528.24, samples=19 00:27:42.225 iops : min= 4176, 
max= 4840, avg=4639.58, stdev=132.06, samples=19 00:27:42.225 lat (usec) : 750=3.99%, 1000=94.58% 00:27:42.225 lat (msec) : 2=1.42% 00:27:42.225 cpu : usr=89.39%, sys=9.16%, ctx=40, majf=0, minf=0 00:27:42.225 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:42.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:42.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:42.225 issued rwts: total=46336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:42.225 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:42.225 filename1: (groupid=0, jobs=1): err= 0: pid=98084: Fri Jul 12 12:39:10 2024 00:27:42.225 read: IOPS=4633, BW=18.1MiB/s (19.0MB/s)(181MiB/10001msec) 00:27:42.225 slat (nsec): min=6825, max=61481, avg=13886.06, stdev=4175.85 00:27:42.225 clat (usec): min=621, max=1897, avg=826.08, stdev=61.69 00:27:42.225 lat (usec): min=630, max=1909, avg=839.97, stdev=62.29 00:27:42.225 clat percentiles (usec): 00:27:42.225 | 1.00th=[ 693], 5.00th=[ 742], 10.00th=[ 758], 20.00th=[ 783], 00:27:42.225 | 30.00th=[ 799], 40.00th=[ 816], 50.00th=[ 824], 60.00th=[ 832], 00:27:42.225 | 70.00th=[ 848], 80.00th=[ 865], 90.00th=[ 881], 95.00th=[ 914], 00:27:42.225 | 99.00th=[ 1020], 99.50th=[ 1045], 99.90th=[ 1139], 99.95th=[ 1696], 00:27:42.225 | 99.99th=[ 1827] 00:27:42.225 bw ( KiB/s): min=16704, max=19360, per=50.07%, avg=18558.32, stdev=528.24, samples=19 00:27:42.225 iops : min= 4176, max= 4840, avg=4639.58, stdev=132.06, samples=19 00:27:42.225 lat (usec) : 750=7.56%, 1000=90.78% 00:27:42.225 lat (msec) : 2=1.66% 00:27:42.225 cpu : usr=90.23%, sys=8.32%, ctx=10, majf=0, minf=9 00:27:42.225 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:42.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:42.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:42.225 issued rwts: total=46336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:42.225 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:42.225 00:27:42.225 Run status group 0 (all jobs): 00:27:42.225 READ: bw=36.2MiB/s (38.0MB/s), 18.1MiB/s-18.1MiB/s (19.0MB/s-19.0MB/s), io=362MiB (380MB), run=10001-10001msec 00:27:42.225 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:42.225 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:42.225 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:42.225 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:42.225 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:42.225 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:42.225 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.225 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:42.225 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.225 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:42.225 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.226 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:27:42.226 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.226 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:42.226 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:42.226 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:42.226 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:42.226 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.226 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:42.226 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.226 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:42.226 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.226 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:42.226 ************************************ 00:27:42.226 END TEST fio_dif_1_multi_subsystems 00:27:42.226 ************************************ 00:27:42.226 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.226 00:27:42.226 real 0m11.095s 00:27:42.226 user 0m18.694s 00:27:42.226 sys 0m2.035s 00:27:42.226 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:42.226 12:39:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:42.226 12:39:10 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:42.226 12:39:10 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:42.226 12:39:10 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:42.226 12:39:10 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:42.226 12:39:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:42.226 ************************************ 00:27:42.226 START TEST fio_dif_rand_params 00:27:42.226 ************************************ 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:42.226 12:39:10 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:42.226 bdev_null0 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:42.226 [2024-07-12 12:39:10.948995] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.226 { 00:27:42.226 "params": { 00:27:42.226 "name": "Nvme$subsystem", 00:27:42.226 "trtype": "$TEST_TRANSPORT", 00:27:42.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.226 "adrfam": "ipv4", 00:27:42.226 "trsvcid": "$NVMF_PORT", 00:27:42.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.226 "hdgst": ${hdgst:-false}, 00:27:42.226 "ddgst": ${ddgst:-false} 00:27:42.226 }, 00:27:42.226 "method": "bdev_nvme_attach_controller" 00:27:42.226 } 00:27:42.226 EOF 00:27:42.226 )") 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:42.226 
12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
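The gen_nvmf_target_json heredoc traced above builds one bdev_nvme_attach_controller entry per subsystem index, joins the fragments with IFS=',' and pretty-prints them through jq before handing the result to fio_bdev on /dev/fd/62. A minimal sketch of that idea, assuming the literal transport values the test prints just below (the real helper lives in nvmf/common.sh and substitutes $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT at run time):

    # Sketch only: emit one attach-controller entry per subsystem index and join them.
    gen_conf() {
      local sub entries=()
      for sub in "$@"; do
        entries+=("{
          \"params\": {
            \"name\": \"Nvme$sub\",
            \"trtype\": \"tcp\",
            \"traddr\": \"10.0.0.2\",
            \"adrfam\": \"ipv4\",
            \"trsvcid\": \"4420\",
            \"subnqn\": \"nqn.2016-06.io.spdk:cnode$sub\",
            \"hostnqn\": \"nqn.2016-06.io.spdk:host$sub\",
            \"hdgst\": false,
            \"ddgst\": false
          },
          \"method\": \"bdev_nvme_attach_controller\"
        }")
      done
      local IFS=,
      # The real helper additionally pipes the joined result through `jq .`.
      printf '%s\n' "${entries[*]}"
    }
    # gen_conf 0 yields the single Nvme0 block printed next in this log.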
00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:42.226 12:39:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:42.227 "params": { 00:27:42.227 "name": "Nvme0", 00:27:42.227 "trtype": "tcp", 00:27:42.227 "traddr": "10.0.0.2", 00:27:42.227 "adrfam": "ipv4", 00:27:42.227 "trsvcid": "4420", 00:27:42.227 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:42.227 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:42.227 "hdgst": false, 00:27:42.227 "ddgst": false 00:27:42.227 }, 00:27:42.227 "method": "bdev_nvme_attach_controller" 00:27:42.227 }' 00:27:42.227 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:42.227 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:42.227 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:42.227 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:42.227 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:42.227 12:39:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:42.227 12:39:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:42.227 12:39:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:42.227 12:39:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:42.227 12:39:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:42.227 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:42.227 ... 
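What fio is being launched with here is the 128 KiB random-read job parameterized at dif.sh@103 (bs=128k, numjobs=3, iodepth=3, runtime=5) against the Nvme0 controller attached from the JSON above; the job file itself comes from gen_fio_conf over /dev/fd/61 and is not shown in the log. A rough stand-alone equivalent under stated assumptions: the job-file contents, the ./nvme0.json path (the JSON printed above saved to a file) and the Nvme0n1 bdev name are illustrative guesses, not the generated configuration:

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./nvme0.json - <<'EOF'
    [global]
    ; values mirror dif.sh@103: bs=128k, numjobs=3, iodepth=3, runtime=5
    thread=1
    time_based=1
    runtime=5
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3

    [filename0]
    ; assumed name of the bdev exposed by the attached Nvme0 controller
    filename=Nvme0n1
    EOF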
00:27:42.227 fio-3.35 00:27:42.227 Starting 3 threads 00:27:48.789 00:27:48.789 filename0: (groupid=0, jobs=1): err= 0: pid=98230: Fri Jul 12 12:39:16 2024 00:27:48.789 read: IOPS=251, BW=31.5MiB/s (33.0MB/s)(158MiB/5003msec) 00:27:48.789 slat (nsec): min=7625, max=48502, avg=16769.19, stdev=4589.25 00:27:48.789 clat (usec): min=8895, max=56731, avg=11872.40, stdev=2369.03 00:27:48.789 lat (usec): min=8908, max=56762, avg=11889.16, stdev=2369.32 00:27:48.789 clat percentiles (usec): 00:27:48.789 | 1.00th=[11338], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:27:48.789 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11600], 60.00th=[11600], 00:27:48.789 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11863], 95.00th=[12256], 00:27:48.789 | 99.00th=[16909], 99.50th=[18220], 99.90th=[56886], 99.95th=[56886], 00:27:48.789 | 99.99th=[56886] 00:27:48.789 bw ( KiB/s): min=29952, max=33792, per=33.28%, avg=32170.67, stdev=1354.62, samples=9 00:27:48.789 iops : min= 234, max= 264, avg=251.33, stdev=10.58, samples=9 00:27:48.789 lat (msec) : 10=0.24%, 20=99.52%, 100=0.24% 00:27:48.789 cpu : usr=91.40%, sys=8.02%, ctx=8, majf=0, minf=9 00:27:48.789 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:48.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:48.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:48.789 issued rwts: total=1260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:48.789 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:48.789 filename0: (groupid=0, jobs=1): err= 0: pid=98231: Fri Jul 12 12:39:16 2024 00:27:48.789 read: IOPS=251, BW=31.5MiB/s (33.0MB/s)(158MiB/5005msec) 00:27:48.789 slat (nsec): min=7822, max=72149, avg=15838.32, stdev=5553.44 00:27:48.789 clat (usec): min=11369, max=48125, avg=11880.42, stdev=2014.69 00:27:48.789 lat (usec): min=11384, max=48157, avg=11896.26, stdev=2015.10 00:27:48.789 clat percentiles (usec): 00:27:48.789 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:27:48.789 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11600], 60.00th=[11600], 00:27:48.789 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11863], 95.00th=[12518], 00:27:48.789 | 99.00th=[17695], 99.50th=[18482], 99.90th=[47973], 99.95th=[47973], 00:27:48.789 | 99.99th=[47973] 00:27:48.789 bw ( KiB/s): min=29952, max=33024, per=33.29%, avg=32179.20, stdev=1225.06, samples=10 00:27:48.789 iops : min= 234, max= 258, avg=251.40, stdev= 9.57, samples=10 00:27:48.789 lat (msec) : 20=99.76%, 50=0.24% 00:27:48.789 cpu : usr=90.95%, sys=8.43%, ctx=10, majf=0, minf=0 00:27:48.789 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:48.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:48.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:48.789 issued rwts: total=1260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:48.789 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:48.789 filename0: (groupid=0, jobs=1): err= 0: pid=98232: Fri Jul 12 12:39:16 2024 00:27:48.789 read: IOPS=251, BW=31.5MiB/s (33.0MB/s)(158MiB/5004msec) 00:27:48.789 slat (usec): min=6, max=239, avg=16.63, stdev= 7.65 00:27:48.789 clat (usec): min=11353, max=46568, avg=11875.52, stdev=1946.98 00:27:48.789 lat (usec): min=11361, max=46588, avg=11892.15, stdev=1947.56 00:27:48.789 clat percentiles (usec): 00:27:48.789 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:27:48.789 | 30.00th=[11469], 
40.00th=[11600], 50.00th=[11600], 60.00th=[11600], 00:27:48.789 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11863], 95.00th=[12387], 00:27:48.789 | 99.00th=[17695], 99.50th=[18220], 99.90th=[46400], 99.95th=[46400], 00:27:48.789 | 99.99th=[46400] 00:27:48.789 bw ( KiB/s): min=29952, max=33792, per=33.28%, avg=32170.67, stdev=1354.62, samples=9 00:27:48.789 iops : min= 234, max= 264, avg=251.33, stdev=10.58, samples=9 00:27:48.789 lat (msec) : 20=99.76%, 50=0.24% 00:27:48.789 cpu : usr=91.03%, sys=8.14%, ctx=47, majf=0, minf=0 00:27:48.789 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:48.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:48.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:48.789 issued rwts: total=1260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:48.789 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:48.789 00:27:48.789 Run status group 0 (all jobs): 00:27:48.789 READ: bw=94.4MiB/s (99.0MB/s), 31.5MiB/s-31.5MiB/s (33.0MB/s-33.0MB/s), io=473MiB (495MB), run=5003-5005msec 00:27:48.789 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:48.789 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 
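For reference, the create_subsystems 0 1 2 call that the trace now steps through reduces to the same four RPCs per index, this time with --dif-type 2 null bdevs. The loop below is a condensed restatement of the traced commands, not the literal dif.sh code; rpc_cmd is the autotest wrapper that forwards to scripts/rpc.py:

    for sub in 0 1 2; do
      # 64 MB null bdev with 512-byte blocks, 16-byte metadata and DIF type 2
      rpc_cmd bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
      # NVMe-oF subsystem, namespace and TCP listener for that bdev
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
          --serial-number "53313233-$sub" --allow-any-host
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
          -t tcp -a 10.0.0.2 -s 4420
    done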
00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.790 bdev_null0 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.790 [2024-07-12 12:39:16.942018] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.790 bdev_null1 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.790 12:39:16 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.790 bdev_null2 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.790 12:39:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@532 -- # local subsystem config 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.790 { 00:27:48.790 "params": { 00:27:48.790 "name": "Nvme$subsystem", 00:27:48.790 "trtype": "$TEST_TRANSPORT", 00:27:48.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.790 "adrfam": "ipv4", 00:27:48.790 "trsvcid": "$NVMF_PORT", 00:27:48.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.790 "hdgst": ${hdgst:-false}, 00:27:48.790 "ddgst": ${ddgst:-false} 00:27:48.790 }, 00:27:48.790 "method": "bdev_nvme_attach_controller" 00:27:48.790 } 00:27:48.790 EOF 00:27:48.790 )") 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.790 { 00:27:48.790 "params": { 00:27:48.790 "name": "Nvme$subsystem", 00:27:48.790 "trtype": "$TEST_TRANSPORT", 00:27:48.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.790 "adrfam": "ipv4", 00:27:48.790 "trsvcid": "$NVMF_PORT", 00:27:48.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.790 "hdgst": ${hdgst:-false}, 00:27:48.790 "ddgst": ${ddgst:-false} 00:27:48.790 }, 00:27:48.790 "method": "bdev_nvme_attach_controller" 00:27:48.790 } 00:27:48.790 EOF 00:27:48.790 )") 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:48.790 12:39:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:48.790 
12:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:48.791 12:39:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:48.791 12:39:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.791 12:39:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.791 { 00:27:48.791 "params": { 00:27:48.791 "name": "Nvme$subsystem", 00:27:48.791 "trtype": "$TEST_TRANSPORT", 00:27:48.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.791 "adrfam": "ipv4", 00:27:48.791 "trsvcid": "$NVMF_PORT", 00:27:48.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.791 "hdgst": ${hdgst:-false}, 00:27:48.791 "ddgst": ${ddgst:-false} 00:27:48.791 }, 00:27:48.791 "method": "bdev_nvme_attach_controller" 00:27:48.791 } 00:27:48.791 EOF 00:27:48.791 )") 00:27:48.791 12:39:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:48.791 12:39:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:48.791 12:39:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:48.791 12:39:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:48.791 "params": { 00:27:48.791 "name": "Nvme0", 00:27:48.791 "trtype": "tcp", 00:27:48.791 "traddr": "10.0.0.2", 00:27:48.791 "adrfam": "ipv4", 00:27:48.791 "trsvcid": "4420", 00:27:48.791 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:48.791 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:48.791 "hdgst": false, 00:27:48.791 "ddgst": false 00:27:48.791 }, 00:27:48.791 "method": "bdev_nvme_attach_controller" 00:27:48.791 },{ 00:27:48.791 "params": { 00:27:48.791 "name": "Nvme1", 00:27:48.791 "trtype": "tcp", 00:27:48.791 "traddr": "10.0.0.2", 00:27:48.791 "adrfam": "ipv4", 00:27:48.791 "trsvcid": "4420", 00:27:48.791 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:48.791 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:48.791 "hdgst": false, 00:27:48.791 "ddgst": false 00:27:48.791 }, 00:27:48.791 "method": "bdev_nvme_attach_controller" 00:27:48.791 },{ 00:27:48.791 "params": { 00:27:48.791 "name": "Nvme2", 00:27:48.791 "trtype": "tcp", 00:27:48.791 "traddr": "10.0.0.2", 00:27:48.791 "adrfam": "ipv4", 00:27:48.791 "trsvcid": "4420", 00:27:48.791 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:48.791 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:48.791 "hdgst": false, 00:27:48.791 "ddgst": false 00:27:48.791 }, 00:27:48.791 "method": "bdev_nvme_attach_controller" 00:27:48.791 }' 00:27:48.791 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:48.791 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:48.791 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:48.791 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:48.791 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:48.791 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:48.791 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:48.791 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:48.791 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- 
# LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:48.791 12:39:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:48.791 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:48.791 ... 00:27:48.791 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:48.791 ... 00:27:48.791 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:48.791 ... 00:27:48.791 fio-3.35 00:27:48.791 Starting 24 threads 00:28:01.004 00:28:01.004 filename0: (groupid=0, jobs=1): err= 0: pid=98329: Fri Jul 12 12:39:27 2024 00:28:01.004 read: IOPS=233, BW=934KiB/s (956kB/s)(9348KiB/10012msec) 00:28:01.004 slat (usec): min=4, max=11030, avg=28.42, stdev=333.27 00:28:01.004 clat (msec): min=21, max=141, avg=68.42, stdev=20.74 00:28:01.004 lat (msec): min=21, max=141, avg=68.45, stdev=20.76 00:28:01.004 clat percentiles (msec): 00:28:01.004 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 48], 00:28:01.004 | 30.00th=[ 56], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:28:01.004 | 70.00th=[ 78], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 109], 00:28:01.005 | 99.00th=[ 122], 99.50th=[ 122], 99.90th=[ 130], 99.95th=[ 142], 00:28:01.005 | 99.99th=[ 142] 00:28:01.005 bw ( KiB/s): min= 672, max= 1656, per=4.15%, avg=926.32, stdev=203.13, samples=19 00:28:01.005 iops : min= 168, max= 414, avg=231.58, stdev=50.78, samples=19 00:28:01.005 lat (msec) : 50=22.46%, 100=68.93%, 250=8.60% 00:28:01.005 cpu : usr=37.95%, sys=2.17%, ctx=1075, majf=0, minf=9 00:28:01.005 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.8%, 16=15.3%, 32=0.0%, >=64=0.0% 00:28:01.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.005 complete : 0=0.0%, 4=87.9%, 8=11.3%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.005 issued rwts: total=2337,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.005 filename0: (groupid=0, jobs=1): err= 0: pid=98330: Fri Jul 12 12:39:27 2024 00:28:01.005 read: IOPS=237, BW=951KiB/s (974kB/s)(9536KiB/10025msec) 00:28:01.005 slat (usec): min=3, max=8023, avg=21.73, stdev=200.69 00:28:01.005 clat (msec): min=22, max=128, avg=67.17, stdev=20.14 00:28:01.005 lat (msec): min=22, max=128, avg=67.19, stdev=20.15 00:28:01.005 clat percentiles (msec): 00:28:01.005 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 48], 00:28:01.005 | 30.00th=[ 54], 40.00th=[ 62], 50.00th=[ 71], 60.00th=[ 72], 00:28:01.005 | 70.00th=[ 75], 80.00th=[ 81], 90.00th=[ 88], 95.00th=[ 109], 00:28:01.005 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 129], 99.95th=[ 129], 00:28:01.005 | 99.99th=[ 129] 00:28:01.005 bw ( KiB/s): min= 672, max= 1560, per=4.25%, avg=947.20, stdev=180.51, samples=20 00:28:01.005 iops : min= 168, max= 390, avg=236.80, stdev=45.13, samples=20 00:28:01.005 lat (msec) : 50=25.71%, 100=66.78%, 250=7.51% 00:28:01.005 cpu : usr=38.10%, sys=2.34%, ctx=1142, majf=0, minf=9 00:28:01.005 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:28:01.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.005 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.005 issued rwts: total=2384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.005 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:28:01.005 filename0: (groupid=0, jobs=1): err= 0: pid=98332: Fri Jul 12 12:39:27 2024 00:28:01.005 read: IOPS=235, BW=943KiB/s (966kB/s)(9452KiB/10020msec) 00:28:01.005 slat (usec): min=4, max=4031, avg=18.70, stdev=116.83 00:28:01.005 clat (msec): min=20, max=143, avg=67.74, stdev=21.16 00:28:01.005 lat (msec): min=20, max=143, avg=67.75, stdev=21.17 00:28:01.005 clat percentiles (msec): 00:28:01.005 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 48], 00:28:01.005 | 30.00th=[ 54], 40.00th=[ 62], 50.00th=[ 70], 60.00th=[ 73], 00:28:01.005 | 70.00th=[ 78], 80.00th=[ 82], 90.00th=[ 100], 95.00th=[ 111], 00:28:01.005 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 140], 99.95th=[ 144], 00:28:01.005 | 99.99th=[ 144] 00:28:01.005 bw ( KiB/s): min= 640, max= 1616, per=4.21%, avg=938.70, stdev=196.97, samples=20 00:28:01.005 iops : min= 160, max= 404, avg=234.65, stdev=49.24, samples=20 00:28:01.005 lat (msec) : 50=23.78%, 100=66.91%, 250=9.31% 00:28:01.005 cpu : usr=41.34%, sys=2.64%, ctx=1345, majf=0, minf=9 00:28:01.005 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.1%, 16=16.3%, 32=0.0%, >=64=0.0% 00:28:01.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.005 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.005 issued rwts: total=2363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.005 filename0: (groupid=0, jobs=1): err= 0: pid=98333: Fri Jul 12 12:39:27 2024 00:28:01.005 read: IOPS=245, BW=980KiB/s (1004kB/s)(9808KiB/10007msec) 00:28:01.005 slat (usec): min=8, max=8025, avg=25.97, stdev=230.64 00:28:01.005 clat (msec): min=4, max=142, avg=65.17, stdev=21.67 00:28:01.005 lat (msec): min=4, max=142, avg=65.20, stdev=21.66 00:28:01.005 clat percentiles (msec): 00:28:01.005 | 1.00th=[ 21], 5.00th=[ 35], 10.00th=[ 42], 20.00th=[ 48], 00:28:01.005 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 67], 60.00th=[ 72], 00:28:01.005 | 70.00th=[ 75], 80.00th=[ 80], 90.00th=[ 90], 95.00th=[ 111], 00:28:01.005 | 99.00th=[ 126], 99.50th=[ 131], 99.90th=[ 131], 99.95th=[ 142], 00:28:01.005 | 99.99th=[ 142] 00:28:01.005 bw ( KiB/s): min= 664, max= 1608, per=4.34%, avg=967.26, stdev=194.23, samples=19 00:28:01.005 iops : min= 166, max= 402, avg=241.79, stdev=48.56, samples=19 00:28:01.005 lat (msec) : 10=0.77%, 20=0.12%, 50=27.24%, 100=63.91%, 250=7.95% 00:28:01.005 cpu : usr=42.26%, sys=2.66%, ctx=1431, majf=0, minf=9 00:28:01.005 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:28:01.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.005 complete : 0=0.0%, 4=86.7%, 8=13.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.005 issued rwts: total=2452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.005 filename0: (groupid=0, jobs=1): err= 0: pid=98335: Fri Jul 12 12:39:27 2024 00:28:01.005 read: IOPS=241, BW=968KiB/s (991kB/s)(9688KiB/10010msec) 00:28:01.005 slat (usec): min=3, max=8031, avg=31.18, stdev=319.75 00:28:01.005 clat (msec): min=8, max=150, avg=65.99, stdev=21.34 00:28:01.005 lat (msec): min=12, max=150, avg=66.02, stdev=21.34 00:28:01.005 clat percentiles (msec): 00:28:01.005 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 48], 00:28:01.005 | 30.00th=[ 51], 40.00th=[ 60], 50.00th=[ 70], 60.00th=[ 72], 00:28:01.005 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 94], 95.00th=[ 109], 
00:28:01.005 | 99.00th=[ 124], 99.50th=[ 138], 99.90th=[ 138], 99.95th=[ 150], 00:28:01.005 | 99.99th=[ 150] 00:28:01.005 bw ( KiB/s): min= 672, max= 1616, per=4.30%, avg=959.89, stdev=198.86, samples=19 00:28:01.005 iops : min= 168, max= 404, avg=239.95, stdev=49.74, samples=19 00:28:01.005 lat (msec) : 10=0.04%, 20=0.17%, 50=29.27%, 100=62.80%, 250=7.72% 00:28:01.005 cpu : usr=37.21%, sys=2.29%, ctx=1074, majf=0, minf=9 00:28:01.005 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:28:01.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.005 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.005 issued rwts: total=2422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.005 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.005 filename0: (groupid=0, jobs=1): err= 0: pid=98336: Fri Jul 12 12:39:27 2024 00:28:01.005 read: IOPS=222, BW=889KiB/s (911kB/s)(8912KiB/10022msec) 00:28:01.005 slat (usec): min=7, max=8024, avg=18.05, stdev=169.76 00:28:01.005 clat (msec): min=16, max=139, avg=71.86, stdev=20.87 00:28:01.005 lat (msec): min=16, max=139, avg=71.88, stdev=20.86 00:28:01.005 clat percentiles (msec): 00:28:01.005 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 51], 00:28:01.005 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:28:01.005 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 101], 95.00th=[ 112], 00:28:01.005 | 99.00th=[ 122], 99.50th=[ 125], 99.90th=[ 127], 99.95th=[ 127], 00:28:01.005 | 99.99th=[ 140] 00:28:01.005 bw ( KiB/s): min= 640, max= 1392, per=3.98%, avg=887.20, stdev=161.32, samples=20 00:28:01.005 iops : min= 160, max= 348, avg=221.80, stdev=40.33, samples=20 00:28:01.006 lat (msec) : 20=0.63%, 50=19.79%, 100=69.57%, 250=10.01% 00:28:01.006 cpu : usr=34.24%, sys=2.03%, ctx=1010, majf=0, minf=9 00:28:01.006 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=79.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:28:01.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.006 complete : 0=0.0%, 4=88.5%, 8=10.7%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.006 issued rwts: total=2228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.006 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.006 filename0: (groupid=0, jobs=1): err= 0: pid=98338: Fri Jul 12 12:39:27 2024 00:28:01.006 read: IOPS=224, BW=897KiB/s (919kB/s)(9000KiB/10033msec) 00:28:01.006 slat (usec): min=7, max=9023, avg=36.25, stdev=422.40 00:28:01.006 clat (msec): min=16, max=147, avg=71.11, stdev=20.73 00:28:01.006 lat (msec): min=16, max=147, avg=71.14, stdev=20.75 00:28:01.006 clat percentiles (msec): 00:28:01.006 | 1.00th=[ 27], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 52], 00:28:01.006 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:28:01.006 | 70.00th=[ 78], 80.00th=[ 84], 90.00th=[ 99], 95.00th=[ 111], 00:28:01.006 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 144], 99.95th=[ 148], 00:28:01.006 | 99.99th=[ 148] 00:28:01.006 bw ( KiB/s): min= 640, max= 1416, per=4.02%, avg=896.00, stdev=163.91, samples=20 00:28:01.006 iops : min= 160, max= 354, avg=224.00, stdev=40.98, samples=20 00:28:01.006 lat (msec) : 20=0.62%, 50=17.87%, 100=72.04%, 250=9.47% 00:28:01.006 cpu : usr=31.60%, sys=2.03%, ctx=1007, majf=0, minf=9 00:28:01.006 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=78.7%, 16=16.2%, 32=0.0%, >=64=0.0% 00:28:01.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.006 complete : 0=0.0%, 4=88.6%, 8=10.5%, 16=0.9%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:28:01.006 issued rwts: total=2250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.006 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.006 filename0: (groupid=0, jobs=1): err= 0: pid=98339: Fri Jul 12 12:39:27 2024 00:28:01.006 read: IOPS=215, BW=863KiB/s (883kB/s)(8644KiB/10022msec) 00:28:01.006 slat (usec): min=5, max=9024, avg=25.72, stdev=311.32 00:28:01.006 clat (msec): min=16, max=143, avg=74.02, stdev=20.43 00:28:01.006 lat (msec): min=16, max=143, avg=74.04, stdev=20.43 00:28:01.006 clat percentiles (msec): 00:28:01.006 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 59], 00:28:01.006 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:28:01.006 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 105], 95.00th=[ 112], 00:28:01.006 | 99.00th=[ 122], 99.50th=[ 123], 99.90th=[ 131], 99.95th=[ 144], 00:28:01.006 | 99.99th=[ 144] 00:28:01.006 bw ( KiB/s): min= 640, max= 1424, per=3.85%, avg=858.00, stdev=159.79, samples=20 00:28:01.006 iops : min= 160, max= 356, avg=214.50, stdev=39.95, samples=20 00:28:01.006 lat (msec) : 20=0.65%, 50=12.45%, 100=75.52%, 250=11.38% 00:28:01.006 cpu : usr=31.09%, sys=2.04%, ctx=921, majf=0, minf=0 00:28:01.006 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=76.7%, 16=16.2%, 32=0.0%, >=64=0.0% 00:28:01.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.006 complete : 0=0.0%, 4=89.3%, 8=9.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.006 issued rwts: total=2161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.006 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.006 filename1: (groupid=0, jobs=1): err= 0: pid=98340: Fri Jul 12 12:39:27 2024 00:28:01.006 read: IOPS=228, BW=913KiB/s (935kB/s)(9156KiB/10023msec) 00:28:01.006 slat (usec): min=8, max=8020, avg=22.18, stdev=205.15 00:28:01.006 clat (msec): min=23, max=136, avg=69.97, stdev=20.06 00:28:01.006 lat (msec): min=23, max=136, avg=69.99, stdev=20.06 00:28:01.006 clat percentiles (msec): 00:28:01.006 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 52], 00:28:01.006 | 30.00th=[ 59], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 74], 00:28:01.006 | 70.00th=[ 79], 80.00th=[ 82], 90.00th=[ 100], 95.00th=[ 112], 00:28:01.006 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 128], 99.95th=[ 128], 00:28:01.006 | 99.99th=[ 138] 00:28:01.006 bw ( KiB/s): min= 664, max= 1544, per=4.07%, avg=908.80, stdev=172.45, samples=20 00:28:01.006 iops : min= 166, max= 386, avg=227.20, stdev=43.11, samples=20 00:28:01.006 lat (msec) : 50=17.96%, 100=72.96%, 250=9.09% 00:28:01.006 cpu : usr=44.35%, sys=2.84%, ctx=1324, majf=0, minf=9 00:28:01.006 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=79.9%, 16=15.9%, 32=0.0%, >=64=0.0% 00:28:01.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.006 complete : 0=0.0%, 4=88.1%, 8=11.1%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.006 issued rwts: total=2289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.006 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.006 filename1: (groupid=0, jobs=1): err= 0: pid=98341: Fri Jul 12 12:39:27 2024 00:28:01.006 read: IOPS=238, BW=956KiB/s (979kB/s)(9564KiB/10007msec) 00:28:01.006 slat (usec): min=4, max=8032, avg=32.36, stdev=366.17 00:28:01.006 clat (msec): min=3, max=188, avg=66.81, stdev=22.48 00:28:01.006 lat (msec): min=3, max=188, avg=66.84, stdev=22.48 00:28:01.006 clat percentiles (msec): 00:28:01.006 | 1.00th=[ 9], 5.00th=[ 35], 10.00th=[ 43], 20.00th=[ 48], 00:28:01.006 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 71], 
60.00th=[ 72], 00:28:01.006 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 94], 95.00th=[ 109], 00:28:01.006 | 99.00th=[ 128], 99.50th=[ 153], 99.90th=[ 153], 99.95th=[ 188], 00:28:01.006 | 99.99th=[ 190] 00:28:01.006 bw ( KiB/s): min= 664, max= 1608, per=4.20%, avg=936.74, stdev=194.59, samples=19 00:28:01.006 iops : min= 166, max= 402, avg=234.16, stdev=48.67, samples=19 00:28:01.006 lat (msec) : 4=0.13%, 10=1.25%, 20=0.21%, 50=25.39%, 100=66.08% 00:28:01.006 lat (msec) : 250=6.94% 00:28:01.006 cpu : usr=31.30%, sys=1.92%, ctx=924, majf=0, minf=9 00:28:01.006 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=80.0%, 16=15.2%, 32=0.0%, >=64=0.0% 00:28:01.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.006 complete : 0=0.0%, 4=87.7%, 8=11.4%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.006 issued rwts: total=2391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.006 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.006 filename1: (groupid=0, jobs=1): err= 0: pid=98342: Fri Jul 12 12:39:27 2024 00:28:01.006 read: IOPS=221, BW=886KiB/s (907kB/s)(8876KiB/10022msec) 00:28:01.006 slat (usec): min=5, max=5034, avg=19.45, stdev=140.02 00:28:01.006 clat (msec): min=16, max=144, avg=72.11, stdev=20.25 00:28:01.006 lat (msec): min=16, max=144, avg=72.13, stdev=20.25 00:28:01.006 clat percentiles (msec): 00:28:01.006 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 56], 00:28:01.006 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:28:01.006 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 102], 95.00th=[ 108], 00:28:01.006 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 127], 99.95th=[ 144], 00:28:01.006 | 99.99th=[ 144] 00:28:01.006 bw ( KiB/s): min= 664, max= 1520, per=3.95%, avg=881.20, stdev=179.27, samples=20 00:28:01.006 iops : min= 166, max= 380, avg=220.30, stdev=44.82, samples=20 00:28:01.006 lat (msec) : 20=0.63%, 50=16.36%, 100=72.87%, 250=10.14% 00:28:01.006 cpu : usr=35.41%, sys=2.03%, ctx=1203, majf=0, minf=9 00:28:01.007 IO depths : 1=0.1%, 2=2.4%, 4=9.7%, 8=73.0%, 16=14.9%, 32=0.0%, >=64=0.0% 00:28:01.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.007 complete : 0=0.0%, 4=89.9%, 8=8.0%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.007 issued rwts: total=2219,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.007 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.007 filename1: (groupid=0, jobs=1): err= 0: pid=98343: Fri Jul 12 12:39:27 2024 00:28:01.007 read: IOPS=250, BW=1002KiB/s (1026kB/s)(9.79MiB/10001msec) 00:28:01.007 slat (usec): min=7, max=8042, avg=30.30, stdev=357.75 00:28:01.007 clat (usec): min=1021, max=142408, avg=63744.13, stdev=24598.65 00:28:01.007 lat (usec): min=1030, max=142428, avg=63774.43, stdev=24602.39 00:28:01.007 clat percentiles (usec): 00:28:01.007 | 1.00th=[ 1401], 5.00th=[ 8848], 10.00th=[ 35914], 20.00th=[ 47973], 00:28:01.007 | 30.00th=[ 48497], 40.00th=[ 59507], 50.00th=[ 68682], 60.00th=[ 71828], 00:28:01.007 | 70.00th=[ 72877], 80.00th=[ 81265], 90.00th=[ 89654], 95.00th=[107480], 00:28:01.007 | 99.00th=[120062], 99.50th=[125305], 99.90th=[129500], 99.95th=[141558], 00:28:01.007 | 99.99th=[141558] 00:28:01.007 bw ( KiB/s): min= 664, max= 1512, per=4.24%, avg=944.89, stdev=180.17, samples=19 00:28:01.007 iops : min= 166, max= 378, avg=236.21, stdev=45.05, samples=19 00:28:01.007 lat (msec) : 2=2.63%, 4=0.88%, 10=1.68%, 20=0.12%, 50=26.86% 00:28:01.007 lat (msec) : 100=60.77%, 250=7.06% 00:28:01.007 cpu : usr=31.55%, sys=2.12%, ctx=1002, majf=0, minf=9 
00:28:01.007 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:28:01.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.007 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.007 issued rwts: total=2506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.007 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.007 filename1: (groupid=0, jobs=1): err= 0: pid=98344: Fri Jul 12 12:39:27 2024 00:28:01.007 read: IOPS=240, BW=963KiB/s (986kB/s)(9636KiB/10007msec) 00:28:01.007 slat (usec): min=8, max=8038, avg=29.51, stdev=312.02 00:28:01.007 clat (msec): min=7, max=144, avg=66.32, stdev=21.00 00:28:01.007 lat (msec): min=7, max=144, avg=66.35, stdev=21.01 00:28:01.007 clat percentiles (msec): 00:28:01.007 | 1.00th=[ 20], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 48], 00:28:01.007 | 30.00th=[ 51], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:28:01.007 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 94], 95.00th=[ 108], 00:28:01.007 | 99.00th=[ 123], 99.50th=[ 124], 99.90th=[ 129], 99.95th=[ 144], 00:28:01.007 | 99.99th=[ 144] 00:28:01.007 bw ( KiB/s): min= 616, max= 1368, per=4.25%, avg=948.37, stdev=156.67, samples=19 00:28:01.007 iops : min= 154, max= 342, avg=237.05, stdev=39.17, samples=19 00:28:01.007 lat (msec) : 10=0.75%, 20=0.29%, 50=28.44%, 100=63.06%, 250=7.47% 00:28:01.007 cpu : usr=33.42%, sys=2.00%, ctx=976, majf=0, minf=9 00:28:01.007 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.3%, 16=15.9%, 32=0.0%, >=64=0.0% 00:28:01.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.007 complete : 0=0.0%, 4=87.0%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.007 issued rwts: total=2409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.007 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.007 filename1: (groupid=0, jobs=1): err= 0: pid=98345: Fri Jul 12 12:39:27 2024 00:28:01.007 read: IOPS=216, BW=867KiB/s (888kB/s)(8704KiB/10036msec) 00:28:01.007 slat (usec): min=7, max=4026, avg=14.81, stdev=86.38 00:28:01.007 clat (msec): min=7, max=142, avg=73.67, stdev=22.22 00:28:01.007 lat (msec): min=7, max=142, avg=73.68, stdev=22.22 00:28:01.007 clat percentiles (msec): 00:28:01.007 | 1.00th=[ 11], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 54], 00:28:01.007 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 79], 00:28:01.007 | 70.00th=[ 83], 80.00th=[ 88], 90.00th=[ 106], 95.00th=[ 113], 00:28:01.007 | 99.00th=[ 123], 99.50th=[ 123], 99.90th=[ 132], 99.95th=[ 132], 00:28:01.007 | 99.99th=[ 142] 00:28:01.007 bw ( KiB/s): min= 672, max= 1504, per=3.87%, avg=863.35, stdev=183.87, samples=20 00:28:01.007 iops : min= 168, max= 376, avg=215.80, stdev=45.99, samples=20 00:28:01.007 lat (msec) : 10=0.64%, 20=0.74%, 50=14.57%, 100=71.83%, 250=12.22% 00:28:01.007 cpu : usr=38.05%, sys=2.27%, ctx=1140, majf=0, minf=9 00:28:01.007 IO depths : 1=0.1%, 2=2.0%, 4=8.1%, 8=74.5%, 16=15.3%, 32=0.0%, >=64=0.0% 00:28:01.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.007 complete : 0=0.0%, 4=89.7%, 8=8.5%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.007 issued rwts: total=2176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.007 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.007 filename1: (groupid=0, jobs=1): err= 0: pid=98346: Fri Jul 12 12:39:27 2024 00:28:01.007 read: IOPS=240, BW=963KiB/s (986kB/s)(9632KiB/10005msec) 00:28:01.007 slat (usec): min=4, max=8023, avg=22.09, stdev=231.80 00:28:01.007 clat (msec): 
min=5, max=149, avg=66.36, stdev=22.27 00:28:01.007 lat (msec): min=5, max=149, avg=66.38, stdev=22.26 00:28:01.007 clat percentiles (msec): 00:28:01.007 | 1.00th=[ 8], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 48], 00:28:01.007 | 30.00th=[ 51], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:28:01.007 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 95], 95.00th=[ 109], 00:28:01.007 | 99.00th=[ 122], 99.50th=[ 133], 99.90th=[ 133], 99.95th=[ 150], 00:28:01.007 | 99.99th=[ 150] 00:28:01.007 bw ( KiB/s): min= 664, max= 1640, per=4.24%, avg=945.26, stdev=200.82, samples=19 00:28:01.007 iops : min= 166, max= 410, avg=236.32, stdev=50.21, samples=19 00:28:01.007 lat (msec) : 10=1.20%, 20=0.12%, 50=28.11%, 100=62.08%, 250=8.47% 00:28:01.007 cpu : usr=31.97%, sys=2.21%, ctx=1136, majf=0, minf=9 00:28:01.007 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=81.9%, 16=15.4%, 32=0.0%, >=64=0.0% 00:28:01.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.007 complete : 0=0.0%, 4=87.3%, 8=12.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.007 issued rwts: total=2408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.007 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.007 filename1: (groupid=0, jobs=1): err= 0: pid=98347: Fri Jul 12 12:39:27 2024 00:28:01.007 read: IOPS=237, BW=950KiB/s (972kB/s)(9512KiB/10016msec) 00:28:01.007 slat (usec): min=5, max=9019, avg=37.29, stdev=397.32 00:28:01.007 clat (msec): min=18, max=124, avg=67.17, stdev=20.65 00:28:01.007 lat (msec): min=18, max=124, avg=67.21, stdev=20.65 00:28:01.007 clat percentiles (msec): 00:28:01.007 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 48], 00:28:01.007 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:28:01.007 | 70.00th=[ 75], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 110], 00:28:01.007 | 99.00th=[ 122], 99.50th=[ 123], 99.90th=[ 125], 99.95th=[ 126], 00:28:01.007 | 99.99th=[ 126] 00:28:01.007 bw ( KiB/s): min= 664, max= 1624, per=4.24%, avg=944.70, stdev=188.85, samples=20 00:28:01.007 iops : min= 166, max= 406, avg=236.15, stdev=47.21, samples=20 00:28:01.007 lat (msec) : 20=0.13%, 50=25.36%, 100=67.12%, 250=7.40% 00:28:01.007 cpu : usr=38.25%, sys=2.05%, ctx=1251, majf=0, minf=9 00:28:01.007 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:28:01.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.007 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.007 issued rwts: total=2378,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.007 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.007 filename2: (groupid=0, jobs=1): err= 0: pid=98348: Fri Jul 12 12:39:27 2024 00:28:01.007 read: IOPS=233, BW=933KiB/s (955kB/s)(9344KiB/10018msec) 00:28:01.007 slat (usec): min=4, max=8026, avg=23.12, stdev=245.53 00:28:01.008 clat (msec): min=17, max=143, avg=68.51, stdev=19.73 00:28:01.008 lat (msec): min=17, max=143, avg=68.53, stdev=19.74 00:28:01.008 clat percentiles (msec): 00:28:01.008 | 1.00th=[ 28], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 49], 00:28:01.008 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:28:01.008 | 70.00th=[ 75], 80.00th=[ 82], 90.00th=[ 95], 95.00th=[ 108], 00:28:01.008 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 129], 99.95th=[ 129], 00:28:01.008 | 99.99th=[ 144] 00:28:01.008 bw ( KiB/s): min= 672, max= 1472, per=4.16%, avg=928.40, stdev=162.28, samples=20 00:28:01.008 iops : min= 168, max= 368, avg=232.05, stdev=40.58, samples=20 00:28:01.008 lat 
(msec) : 20=0.21%, 50=22.09%, 100=69.95%, 250=7.75% 00:28:01.008 cpu : usr=34.54%, sys=2.28%, ctx=1201, majf=0, minf=9 00:28:01.008 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=81.7%, 16=16.0%, 32=0.0%, >=64=0.0% 00:28:01.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.008 complete : 0=0.0%, 4=87.6%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.008 issued rwts: total=2336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.008 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.008 filename2: (groupid=0, jobs=1): err= 0: pid=98349: Fri Jul 12 12:39:27 2024 00:28:01.008 read: IOPS=237, BW=949KiB/s (972kB/s)(9504KiB/10017msec) 00:28:01.008 slat (usec): min=4, max=8027, avg=22.65, stdev=233.95 00:28:01.008 clat (msec): min=23, max=144, avg=67.34, stdev=20.64 00:28:01.008 lat (msec): min=23, max=144, avg=67.37, stdev=20.64 00:28:01.008 clat percentiles (msec): 00:28:01.008 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 48], 00:28:01.008 | 30.00th=[ 54], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 72], 00:28:01.008 | 70.00th=[ 75], 80.00th=[ 82], 90.00th=[ 94], 95.00th=[ 109], 00:28:01.008 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 125], 99.95th=[ 125], 00:28:01.008 | 99.99th=[ 144] 00:28:01.008 bw ( KiB/s): min= 632, max= 1640, per=4.23%, avg=943.85, stdev=199.49, samples=20 00:28:01.008 iops : min= 158, max= 410, avg=235.95, stdev=49.86, samples=20 00:28:01.008 lat (msec) : 50=24.92%, 100=67.26%, 250=7.83% 00:28:01.008 cpu : usr=36.65%, sys=2.13%, ctx=1231, majf=0, minf=9 00:28:01.008 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=81.9%, 16=15.7%, 32=0.0%, >=64=0.0% 00:28:01.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.008 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.008 issued rwts: total=2376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.008 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.008 filename2: (groupid=0, jobs=1): err= 0: pid=98350: Fri Jul 12 12:39:27 2024 00:28:01.008 read: IOPS=229, BW=918KiB/s (940kB/s)(9228KiB/10051msec) 00:28:01.008 slat (usec): min=7, max=4034, avg=18.04, stdev=118.31 00:28:01.008 clat (msec): min=2, max=150, avg=69.48, stdev=25.41 00:28:01.008 lat (msec): min=2, max=150, avg=69.50, stdev=25.41 00:28:01.008 clat percentiles (msec): 00:28:01.008 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 39], 20.00th=[ 52], 00:28:01.008 | 30.00th=[ 64], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:28:01.008 | 70.00th=[ 78], 80.00th=[ 84], 90.00th=[ 105], 95.00th=[ 113], 00:28:01.008 | 99.00th=[ 125], 99.50th=[ 129], 99.90th=[ 148], 99.95th=[ 150], 00:28:01.008 | 99.99th=[ 150] 00:28:01.008 bw ( KiB/s): min= 576, max= 1904, per=4.12%, avg=918.80, stdev=291.85, samples=20 00:28:01.008 iops : min= 144, max= 476, avg=229.70, stdev=72.96, samples=20 00:28:01.008 lat (msec) : 4=2.77%, 10=1.99%, 20=0.69%, 50=12.53%, 100=69.61% 00:28:01.008 lat (msec) : 250=12.40% 00:28:01.008 cpu : usr=43.99%, sys=2.51%, ctx=1443, majf=0, minf=0 00:28:01.008 IO depths : 1=0.2%, 2=2.1%, 4=7.8%, 8=74.5%, 16=15.4%, 32=0.0%, >=64=0.0% 00:28:01.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.008 complete : 0=0.0%, 4=89.7%, 8=8.6%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.008 issued rwts: total=2307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.008 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.008 filename2: (groupid=0, jobs=1): err= 0: pid=98351: Fri Jul 12 12:39:27 2024 00:28:01.008 read: IOPS=238, 
BW=955KiB/s (978kB/s)(9592KiB/10048msec) 00:28:01.008 slat (usec): min=7, max=8036, avg=23.39, stdev=254.51 00:28:01.008 clat (msec): min=2, max=144, avg=66.84, stdev=24.79 00:28:01.008 lat (msec): min=2, max=144, avg=66.87, stdev=24.80 00:28:01.008 clat percentiles (msec): 00:28:01.008 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 36], 20.00th=[ 48], 00:28:01.008 | 30.00th=[ 57], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 72], 00:28:01.008 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 109], 00:28:01.008 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 142], 00:28:01.008 | 99.99th=[ 144] 00:28:01.008 bw ( KiB/s): min= 616, max= 1792, per=4.27%, avg=952.80, stdev=275.73, samples=20 00:28:01.008 iops : min= 154, max= 448, avg=238.20, stdev=68.93, samples=20 00:28:01.008 lat (msec) : 4=2.00%, 10=2.59%, 20=0.67%, 50=17.81%, 100=69.02% 00:28:01.008 lat (msec) : 250=7.92% 00:28:01.008 cpu : usr=35.57%, sys=2.11%, ctx=991, majf=0, minf=0 00:28:01.008 IO depths : 1=0.2%, 2=1.2%, 4=4.1%, 8=78.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:28:01.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.008 complete : 0=0.0%, 4=88.4%, 8=10.7%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.008 issued rwts: total=2398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.008 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.008 filename2: (groupid=0, jobs=1): err= 0: pid=98352: Fri Jul 12 12:39:27 2024 00:28:01.008 read: IOPS=231, BW=925KiB/s (947kB/s)(9284KiB/10041msec) 00:28:01.008 slat (usec): min=4, max=8027, avg=35.86, stdev=390.19 00:28:01.008 clat (msec): min=8, max=150, avg=69.05, stdev=22.21 00:28:01.008 lat (msec): min=8, max=150, avg=69.08, stdev=22.21 00:28:01.008 clat percentiles (msec): 00:28:01.008 | 1.00th=[ 10], 5.00th=[ 35], 10.00th=[ 43], 20.00th=[ 48], 00:28:01.008 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 72], 00:28:01.008 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 110], 00:28:01.008 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 144], 99.95th=[ 144], 00:28:01.008 | 99.99th=[ 150] 00:28:01.008 bw ( KiB/s): min= 672, max= 1536, per=4.13%, avg=921.30, stdev=204.60, samples=20 00:28:01.008 iops : min= 168, max= 384, avg=230.30, stdev=51.18, samples=20 00:28:01.008 lat (msec) : 10=1.29%, 20=0.17%, 50=21.46%, 100=68.33%, 250=8.75% 00:28:01.008 cpu : usr=34.62%, sys=2.38%, ctx=987, majf=0, minf=9 00:28:01.008 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=82.3%, 16=16.8%, 32=0.0%, >=64=0.0% 00:28:01.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.008 complete : 0=0.0%, 4=87.8%, 8=12.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.008 issued rwts: total=2321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.008 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.008 filename2: (groupid=0, jobs=1): err= 0: pid=98353: Fri Jul 12 12:39:27 2024 00:28:01.008 read: IOPS=218, BW=873KiB/s (894kB/s)(8740KiB/10014msec) 00:28:01.008 slat (usec): min=8, max=4029, avg=20.39, stdev=138.18 00:28:01.008 clat (msec): min=22, max=155, avg=73.21, stdev=21.93 00:28:01.008 lat (msec): min=22, max=155, avg=73.23, stdev=21.93 00:28:01.008 clat percentiles (msec): 00:28:01.008 | 1.00th=[ 24], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 53], 00:28:01.008 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 77], 00:28:01.008 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 106], 95.00th=[ 114], 00:28:01.008 | 99.00th=[ 125], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:28:01.008 | 99.99th=[ 157] 00:28:01.008 bw ( 
KiB/s): min= 656, max= 1584, per=3.90%, avg=870.00, stdev=202.63, samples=20 00:28:01.008 iops : min= 164, max= 396, avg=217.50, stdev=50.66, samples=20 00:28:01.008 lat (msec) : 50=15.56%, 100=71.21%, 250=13.23% 00:28:01.008 cpu : usr=41.29%, sys=2.74%, ctx=1437, majf=0, minf=9 00:28:01.008 IO depths : 1=0.1%, 2=2.7%, 4=10.9%, 8=71.9%, 16=14.5%, 32=0.0%, >=64=0.0% 00:28:01.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.008 complete : 0=0.0%, 4=90.1%, 8=7.5%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.008 issued rwts: total=2185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.008 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.008 filename2: (groupid=0, jobs=1): err= 0: pid=98354: Fri Jul 12 12:39:27 2024 00:28:01.008 read: IOPS=240, BW=962KiB/s (985kB/s)(9628KiB/10009msec) 00:28:01.008 slat (usec): min=4, max=8031, avg=27.93, stdev=270.71 00:28:01.008 clat (msec): min=18, max=141, avg=66.36, stdev=20.56 00:28:01.008 lat (msec): min=18, max=141, avg=66.39, stdev=20.56 00:28:01.008 clat percentiles (msec): 00:28:01.008 | 1.00th=[ 24], 5.00th=[ 37], 10.00th=[ 44], 20.00th=[ 48], 00:28:01.008 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:28:01.008 | 70.00th=[ 75], 80.00th=[ 81], 90.00th=[ 92], 95.00th=[ 107], 00:28:01.008 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 125], 99.95th=[ 142], 00:28:01.008 | 99.99th=[ 142] 00:28:01.008 bw ( KiB/s): min= 664, max= 1640, per=4.28%, avg=955.53, stdev=200.39, samples=19 00:28:01.008 iops : min= 166, max= 410, avg=238.84, stdev=50.14, samples=19 00:28:01.008 lat (msec) : 20=0.37%, 50=25.01%, 100=66.51%, 250=8.10% 00:28:01.008 cpu : usr=41.50%, sys=2.62%, ctx=1235, majf=0, minf=9 00:28:01.008 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.7%, 16=15.4%, 32=0.0%, >=64=0.0% 00:28:01.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.008 complete : 0=0.0%, 4=87.2%, 8=12.3%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.009 issued rwts: total=2407,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.009 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:01.009 filename2: (groupid=0, jobs=1): err= 0: pid=98355: Fri Jul 12 12:39:27 2024 00:28:01.009 read: IOPS=229, BW=918KiB/s (940kB/s)(9200KiB/10022msec) 00:28:01.009 slat (usec): min=4, max=8033, avg=29.47, stdev=313.89 00:28:01.009 clat (msec): min=23, max=143, avg=69.53, stdev=20.68 00:28:01.009 lat (msec): min=23, max=143, avg=69.56, stdev=20.68 00:28:01.009 clat percentiles (msec): 00:28:01.009 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 49], 00:28:01.009 | 30.00th=[ 59], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 73], 00:28:01.009 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 109], 00:28:01.009 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 144], 99.95th=[ 144], 00:28:01.009 | 99.99th=[ 144] 00:28:01.009 bw ( KiB/s): min= 640, max= 1584, per=4.11%, avg=916.05, stdev=184.34, samples=20 00:28:01.009 iops : min= 160, max= 396, avg=229.00, stdev=46.09, samples=20 00:28:01.009 lat (msec) : 50=23.09%, 100=68.57%, 250=8.35% 00:28:01.009 cpu : usr=31.03%, sys=2.12%, ctx=921, majf=0, minf=9 00:28:01.009 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.4%, 16=15.9%, 32=0.0%, >=64=0.0% 00:28:01.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.009 complete : 0=0.0%, 4=88.0%, 8=11.4%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.009 issued rwts: total=2300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.009 latency : target=0, window=0, percentile=100.00%, depth=16 
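The per-job blocks above can be cross-checked from the printed numbers alone. For the last job (pid=98355), bandwidth divided by IOPS gives the effective read size, and the issued I/O count times that size reproduces the io total; the 4 KiB read size is inferred from that ratio, fio does not print it here. A minimal sanity check in shell arithmetic:

  # pid=98355: 918 KiB/s at 229 IOPS -> ~4 KiB per read; 2300 issued reads * 4 KiB -> 9200 KiB
  echo $(( 918 / 229 ))   # 4 (KiB per read, inferred)
  echo $(( 2300 * 4 ))    # 9200, matching "(9200KiB/10022msec)" in the read line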
00:28:01.009 00:28:01.009 Run status group 0 (all jobs): 00:28:01.009 READ: bw=21.8MiB/s (22.8MB/s), 863KiB/s-1002KiB/s (883kB/s-1026kB/s), io=219MiB (229MB), run=10001-10051msec 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.009 bdev_null0 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.009 [2024-07-12 12:39:28.297237] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.009 bdev_null1 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.009 { 00:28:01.009 "params": { 00:28:01.009 "name": "Nvme$subsystem", 00:28:01.009 "trtype": "$TEST_TRANSPORT", 00:28:01.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.009 "adrfam": "ipv4", 00:28:01.009 "trsvcid": "$NVMF_PORT", 00:28:01.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.009 "hdgst": ${hdgst:-false}, 00:28:01.009 "ddgst": ${ddgst:-false} 00:28:01.009 }, 00:28:01.009 "method": "bdev_nvme_attach_controller" 00:28:01.009 } 00:28:01.009 EOF 00:28:01.009 )") 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- 
# local fio_dir=/usr/src/fio 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:01.009 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.010 { 00:28:01.010 "params": { 00:28:01.010 "name": "Nvme$subsystem", 00:28:01.010 "trtype": "$TEST_TRANSPORT", 00:28:01.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.010 "adrfam": "ipv4", 00:28:01.010 "trsvcid": "$NVMF_PORT", 00:28:01.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.010 "hdgst": ${hdgst:-false}, 00:28:01.010 "ddgst": ${ddgst:-false} 00:28:01.010 }, 00:28:01.010 "method": "bdev_nvme_attach_controller" 00:28:01.010 } 00:28:01.010 EOF 00:28:01.010 )") 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
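The ldd/grep/awk sequence traced here is how the fio_plugin helper in autotest_common.sh decides what to preload before launching fio: if the SPDK bdev plugin was built with ASAN, the sanitizer runtime must appear in LD_PRELOAD ahead of the plugin itself. A condensed sketch of that logic, using the paths from this log (the real helper also repeats the check for libclang_rt.asan):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  # empty when the plugin is not ASAN-instrumented, as in this run
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61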
00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:01.010 "params": { 00:28:01.010 "name": "Nvme0", 00:28:01.010 "trtype": "tcp", 00:28:01.010 "traddr": "10.0.0.2", 00:28:01.010 "adrfam": "ipv4", 00:28:01.010 "trsvcid": "4420", 00:28:01.010 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:01.010 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:01.010 "hdgst": false, 00:28:01.010 "ddgst": false 00:28:01.010 }, 00:28:01.010 "method": "bdev_nvme_attach_controller" 00:28:01.010 },{ 00:28:01.010 "params": { 00:28:01.010 "name": "Nvme1", 00:28:01.010 "trtype": "tcp", 00:28:01.010 "traddr": "10.0.0.2", 00:28:01.010 "adrfam": "ipv4", 00:28:01.010 "trsvcid": "4420", 00:28:01.010 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:01.010 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:01.010 "hdgst": false, 00:28:01.010 "ddgst": false 00:28:01.010 }, 00:28:01.010 "method": "bdev_nvme_attach_controller" 00:28:01.010 }' 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:01.010 12:39:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:01.010 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:01.010 ... 00:28:01.010 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:01.010 ... 
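The fio job file handed over on /dev/fd/61 comes from gen_fio_conf in target/dif.sh and is not echoed to the log; the sketch below reconstructs its likely shape from the parameters visible in this run (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, randread, job names filename0/filename1). The NvmeXn1 filenames follow the bdev names that bdev_nvme_attach_controller normally creates and, like the exact layout, are an assumption rather than a quote from the log. Two jobs at numjobs=2 is also what produces the "Starting 4 threads" line that follows.

# hypothetical reconstruction of the generated job file; not copied verbatim from the log
cat <<FIO_JOB
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
FIO_JOB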
00:28:01.010 fio-3.35 00:28:01.010 Starting 4 threads 00:28:05.191 00:28:05.191 filename0: (groupid=0, jobs=1): err= 0: pid=98495: Fri Jul 12 12:39:34 2024 00:28:05.191 read: IOPS=1810, BW=14.1MiB/s (14.8MB/s)(70.7MiB/5001msec) 00:28:05.191 slat (nsec): min=7214, max=81939, avg=17861.33, stdev=9444.47 00:28:05.191 clat (usec): min=782, max=7912, avg=4358.23, stdev=1014.03 00:28:05.191 lat (usec): min=791, max=7939, avg=4376.09, stdev=1013.52 00:28:05.191 clat percentiles (usec): 00:28:05.191 | 1.00th=[ 1860], 5.00th=[ 2278], 10.00th=[ 2638], 20.00th=[ 3458], 00:28:05.191 | 30.00th=[ 4228], 40.00th=[ 4555], 50.00th=[ 4686], 60.00th=[ 4752], 00:28:05.191 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5669], 00:28:05.191 | 99.00th=[ 6194], 99.50th=[ 6259], 99.90th=[ 6521], 99.95th=[ 6783], 00:28:05.191 | 99.99th=[ 7898] 00:28:05.191 bw ( KiB/s): min=12160, max=17232, per=22.93%, avg=14613.33, stdev=1956.90, samples=9 00:28:05.191 iops : min= 1520, max= 2154, avg=1826.67, stdev=244.61, samples=9 00:28:05.191 lat (usec) : 1000=0.21% 00:28:05.191 lat (msec) : 2=2.49%, 4=23.46%, 10=73.84% 00:28:05.191 cpu : usr=93.62%, sys=5.60%, ctx=4, majf=0, minf=9 00:28:05.191 IO depths : 1=0.1%, 2=15.8%, 4=55.7%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:05.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.191 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.191 issued rwts: total=9052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.191 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:05.191 filename0: (groupid=0, jobs=1): err= 0: pid=98496: Fri Jul 12 12:39:34 2024 00:28:05.191 read: IOPS=2128, BW=16.6MiB/s (17.4MB/s)(83.2MiB/5003msec) 00:28:05.191 slat (nsec): min=6898, max=84023, avg=20214.92, stdev=8691.49 00:28:05.191 clat (usec): min=963, max=7626, avg=3703.90, stdev=1087.11 00:28:05.191 lat (usec): min=973, max=7652, avg=3724.11, stdev=1086.54 00:28:05.191 clat percentiles (usec): 00:28:05.191 | 1.00th=[ 1336], 5.00th=[ 2089], 10.00th=[ 2278], 20.00th=[ 2573], 00:28:05.191 | 30.00th=[ 2835], 40.00th=[ 3228], 50.00th=[ 3916], 60.00th=[ 4359], 00:28:05.191 | 70.00th=[ 4555], 80.00th=[ 4752], 90.00th=[ 4948], 95.00th=[ 5145], 00:28:05.191 | 99.00th=[ 5473], 99.50th=[ 5800], 99.90th=[ 6390], 99.95th=[ 6652], 00:28:05.191 | 99.99th=[ 7111] 00:28:05.191 bw ( KiB/s): min=15472, max=18464, per=26.46%, avg=16867.56, stdev=1014.07, samples=9 00:28:05.191 iops : min= 1934, max= 2308, avg=2108.44, stdev=126.76, samples=9 00:28:05.191 lat (usec) : 1000=0.03% 00:28:05.191 lat (msec) : 2=3.46%, 4=49.18%, 10=47.34% 00:28:05.191 cpu : usr=91.80%, sys=6.86%, ctx=10, majf=0, minf=0 00:28:05.191 IO depths : 1=0.1%, 2=3.9%, 4=62.1%, 8=34.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:05.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.191 complete : 0=0.0%, 4=98.6%, 8=1.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.191 issued rwts: total=10651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.191 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:05.191 filename1: (groupid=0, jobs=1): err= 0: pid=98497: Fri Jul 12 12:39:34 2024 00:28:05.191 read: IOPS=2167, BW=16.9MiB/s (17.8MB/s)(84.7MiB/5003msec) 00:28:05.191 slat (usec): min=7, max=227, avg=15.80, stdev= 9.13 00:28:05.191 clat (usec): min=977, max=8335, avg=3647.73, stdev=1077.66 00:28:05.191 lat (usec): min=987, max=8362, avg=3663.53, stdev=1077.39 00:28:05.192 clat percentiles (usec): 00:28:05.192 | 1.00th=[ 1450], 5.00th=[ 2040], 
10.00th=[ 2212], 20.00th=[ 2540], 00:28:05.192 | 30.00th=[ 2802], 40.00th=[ 3163], 50.00th=[ 3785], 60.00th=[ 4293], 00:28:05.192 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4948], 95.00th=[ 5080], 00:28:05.192 | 99.00th=[ 5407], 99.50th=[ 5604], 99.90th=[ 6849], 99.95th=[ 6915], 00:28:05.192 | 99.99th=[ 7111] 00:28:05.192 bw ( KiB/s): min=15680, max=18752, per=27.04%, avg=17233.78, stdev=976.60, samples=9 00:28:05.192 iops : min= 1960, max= 2344, avg=2154.22, stdev=122.08, samples=9 00:28:05.192 lat (usec) : 1000=0.09% 00:28:05.192 lat (msec) : 2=4.13%, 4=50.41%, 10=45.36% 00:28:05.192 cpu : usr=93.06%, sys=5.70%, ctx=14, majf=0, minf=0 00:28:05.192 IO depths : 1=0.1%, 2=3.7%, 4=62.9%, 8=33.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:05.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.192 complete : 0=0.0%, 4=98.6%, 8=1.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.192 issued rwts: total=10846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.192 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:05.192 filename1: (groupid=0, jobs=1): err= 0: pid=98498: Fri Jul 12 12:39:34 2024 00:28:05.192 read: IOPS=1861, BW=14.5MiB/s (15.2MB/s)(72.7MiB/5002msec) 00:28:05.192 slat (nsec): min=7583, max=93729, avg=19144.30, stdev=9380.62 00:28:05.192 clat (usec): min=1295, max=8751, avg=4236.59, stdev=974.90 00:28:05.192 lat (usec): min=1311, max=8783, avg=4255.73, stdev=973.82 00:28:05.192 clat percentiles (usec): 00:28:05.192 | 1.00th=[ 1778], 5.00th=[ 2442], 10.00th=[ 2606], 20.00th=[ 3130], 00:28:05.192 | 30.00th=[ 3949], 40.00th=[ 4490], 50.00th=[ 4686], 60.00th=[ 4752], 00:28:05.192 | 70.00th=[ 4817], 80.00th=[ 4948], 90.00th=[ 5145], 95.00th=[ 5342], 00:28:05.192 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 7373], 99.95th=[ 7570], 00:28:05.192 | 99.99th=[ 8717] 00:28:05.192 bw ( KiB/s): min=12416, max=16736, per=23.65%, avg=15073.78, stdev=1508.56, samples=9 00:28:05.192 iops : min= 1552, max= 2092, avg=1884.22, stdev=188.57, samples=9 00:28:05.192 lat (msec) : 2=1.49%, 4=29.29%, 10=69.22% 00:28:05.192 cpu : usr=94.12%, sys=5.02%, ctx=14, majf=0, minf=9 00:28:05.192 IO depths : 1=0.1%, 2=13.6%, 4=56.9%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:05.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.192 complete : 0=0.0%, 4=94.8%, 8=5.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.192 issued rwts: total=9310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.192 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:05.192 00:28:05.192 Run status group 0 (all jobs): 00:28:05.192 READ: bw=62.2MiB/s (65.3MB/s), 14.1MiB/s-16.9MiB/s (14.8MB/s-17.8MB/s), io=311MiB (327MB), run=5001-5003msec 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.451 12:39:34 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.451 00:28:05.451 real 0m23.458s 00:28:05.451 user 2m2.528s 00:28:05.451 sys 0m8.732s 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:05.451 12:39:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.451 ************************************ 00:28:05.451 END TEST fio_dif_rand_params 00:28:05.451 ************************************ 00:28:05.451 12:39:34 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:05.451 12:39:34 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:05.451 12:39:34 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:05.451 12:39:34 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:05.451 12:39:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:05.451 ************************************ 00:28:05.451 START TEST fio_dif_digest 00:28:05.451 ************************************ 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:05.451 bdev_null0 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:05.451 [2024-07-12 12:39:34.450604] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.451 { 00:28:05.451 "params": { 00:28:05.451 "name": "Nvme$subsystem", 00:28:05.451 "trtype": "$TEST_TRANSPORT", 00:28:05.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.451 "adrfam": "ipv4", 00:28:05.451 "trsvcid": "$NVMF_PORT", 00:28:05.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.451 "hdgst": ${hdgst:-false}, 00:28:05.451 "ddgst": ${ddgst:-false} 00:28:05.451 }, 00:28:05.451 "method": "bdev_nvme_attach_controller" 00:28:05.451 } 00:28:05.451 EOF 00:28:05.451 )") 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
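The rpc_cmd calls captured above are the autotest wrapper around SPDK's JSON-RPC client; outside the harness the same DIF-type-3 digest target could be assembled by hand with scripts/rpc.py. The method names and arguments below are verbatim from this run, while the rpc.py invocation itself (and the extra nvmf_create_transport step a fresh target would need) is a sketch rather than something this log executed:

  ./scripts/rpc.py nvmf_create_transport -t tcp   # already set up earlier in this job
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420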
00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:05.451 "params": { 00:28:05.451 "name": "Nvme0", 00:28:05.451 "trtype": "tcp", 00:28:05.451 "traddr": "10.0.0.2", 00:28:05.451 "adrfam": "ipv4", 00:28:05.451 "trsvcid": "4420", 00:28:05.451 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:05.451 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:05.451 "hdgst": true, 00:28:05.451 "ddgst": true 00:28:05.451 }, 00:28:05.451 "method": "bdev_nvme_attach_controller" 00:28:05.451 }' 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:05.451 12:39:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:05.710 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:05.710 ... 
00:28:05.710 fio-3.35 00:28:05.710 Starting 3 threads 00:28:17.963 00:28:17.963 filename0: (groupid=0, jobs=1): err= 0: pid=98603: Fri Jul 12 12:39:45 2024 00:28:17.963 read: IOPS=223, BW=27.9MiB/s (29.3MB/s)(279MiB/10006msec) 00:28:17.963 slat (nsec): min=7789, max=47725, avg=10495.78, stdev=3404.37 00:28:17.963 clat (usec): min=8804, max=15832, avg=13405.06, stdev=259.03 00:28:17.963 lat (usec): min=8813, max=15865, avg=13415.56, stdev=259.37 00:28:17.963 clat percentiles (usec): 00:28:17.963 | 1.00th=[13173], 5.00th=[13173], 10.00th=[13173], 20.00th=[13173], 00:28:17.963 | 30.00th=[13304], 40.00th=[13304], 50.00th=[13435], 60.00th=[13435], 00:28:17.963 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13698], 95.00th=[13698], 00:28:17.963 | 99.00th=[13829], 99.50th=[13960], 99.90th=[15795], 99.95th=[15795], 00:28:17.963 | 99.99th=[15795] 00:28:17.963 bw ( KiB/s): min=27648, max=29184, per=33.31%, avg=28569.60, stdev=401.78, samples=20 00:28:17.963 iops : min= 216, max= 228, avg=223.20, stdev= 3.14, samples=20 00:28:17.963 lat (msec) : 10=0.13%, 20=99.87% 00:28:17.963 cpu : usr=93.69%, sys=5.75%, ctx=97, majf=0, minf=9 00:28:17.963 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:17.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:17.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:17.963 issued rwts: total=2235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:17.963 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:17.963 filename0: (groupid=0, jobs=1): err= 0: pid=98604: Fri Jul 12 12:39:45 2024 00:28:17.963 read: IOPS=223, BW=27.9MiB/s (29.3MB/s)(279MiB/10007msec) 00:28:17.963 slat (usec): min=8, max=125, avg=14.65, stdev=10.63 00:28:17.963 clat (usec): min=12137, max=13896, avg=13393.52, stdev=176.71 00:28:17.963 lat (usec): min=12146, max=13917, avg=13408.17, stdev=179.06 00:28:17.963 clat percentiles (usec): 00:28:17.963 | 1.00th=[13173], 5.00th=[13173], 10.00th=[13173], 20.00th=[13173], 00:28:17.963 | 30.00th=[13304], 40.00th=[13304], 50.00th=[13435], 60.00th=[13435], 00:28:17.963 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13698], 95.00th=[13698], 00:28:17.963 | 99.00th=[13829], 99.50th=[13829], 99.90th=[13829], 99.95th=[13829], 00:28:17.963 | 99.99th=[13960] 00:28:17.963 bw ( KiB/s): min=27648, max=29184, per=33.31%, avg=28569.60, stdev=401.78, samples=20 00:28:17.963 iops : min= 216, max= 228, avg=223.20, stdev= 3.14, samples=20 00:28:17.963 lat (msec) : 20=100.00% 00:28:17.963 cpu : usr=92.22%, sys=7.06%, ctx=15, majf=0, minf=0 00:28:17.963 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:17.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:17.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:17.963 issued rwts: total=2235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:17.963 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:17.963 filename0: (groupid=0, jobs=1): err= 0: pid=98605: Fri Jul 12 12:39:45 2024 00:28:17.963 read: IOPS=223, BW=27.9MiB/s (29.3MB/s)(279MiB/10007msec) 00:28:17.963 slat (nsec): min=7943, max=54240, avg=12425.69, stdev=5729.93 00:28:17.963 clat (usec): min=10930, max=14055, avg=13399.66, stdev=197.51 00:28:17.963 lat (usec): min=10938, max=14086, avg=13412.09, stdev=198.45 00:28:17.963 clat percentiles (usec): 00:28:17.963 | 1.00th=[13173], 5.00th=[13173], 10.00th=[13173], 20.00th=[13173], 00:28:17.963 | 30.00th=[13304], 40.00th=[13304], 
50.00th=[13435], 60.00th=[13435], 00:28:17.963 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13698], 95.00th=[13698], 00:28:17.963 | 99.00th=[13829], 99.50th=[13829], 99.90th=[13960], 99.95th=[14091], 00:28:17.963 | 99.99th=[14091] 00:28:17.963 bw ( KiB/s): min=27648, max=29184, per=33.31%, avg=28569.60, stdev=401.78, samples=20 00:28:17.963 iops : min= 216, max= 228, avg=223.20, stdev= 3.14, samples=20 00:28:17.963 lat (msec) : 20=100.00% 00:28:17.963 cpu : usr=93.50%, sys=5.92%, ctx=5, majf=0, minf=0 00:28:17.963 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:17.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:17.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:17.963 issued rwts: total=2235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:17.963 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:17.963 00:28:17.963 Run status group 0 (all jobs): 00:28:17.963 READ: bw=83.8MiB/s (87.8MB/s), 27.9MiB/s-27.9MiB/s (29.3MB/s-29.3MB/s), io=838MiB (879MB), run=10006-10007msec 00:28:17.963 12:39:45 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:17.963 12:39:45 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:17.963 12:39:45 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:17.963 12:39:45 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:17.963 12:39:45 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:17.963 12:39:45 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:17.963 12:39:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.963 12:39:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:17.963 12:39:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.963 12:39:45 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:17.964 12:39:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.964 12:39:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:17.964 12:39:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.964 00:28:17.964 real 0m10.982s 00:28:17.964 user 0m28.587s 00:28:17.964 sys 0m2.143s 00:28:17.964 ************************************ 00:28:17.964 END TEST fio_dif_digest 00:28:17.964 ************************************ 00:28:17.964 12:39:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:17.964 12:39:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:17.964 12:39:45 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:17.964 12:39:45 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:17.964 12:39:45 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:17.964 12:39:45 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:17.964 12:39:45 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:28:17.964 12:39:45 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:17.964 12:39:45 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:28:17.964 12:39:45 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:17.964 12:39:45 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:17.964 rmmod nvme_tcp 00:28:17.964 rmmod nvme_fabrics 00:28:17.964 rmmod nvme_keyring 00:28:17.964 12:39:45 nvmf_dif -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:17.964 12:39:45 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:28:17.964 12:39:45 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:28:17.964 12:39:45 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 97853 ']' 00:28:17.964 12:39:45 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 97853 00:28:17.964 12:39:45 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 97853 ']' 00:28:17.964 12:39:45 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 97853 00:28:17.964 12:39:45 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:28:17.964 12:39:45 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:17.964 12:39:45 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97853 00:28:17.964 12:39:45 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:17.964 killing process with pid 97853 00:28:17.964 12:39:45 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:17.964 12:39:45 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97853' 00:28:17.964 12:39:45 nvmf_dif -- common/autotest_common.sh@967 -- # kill 97853 00:28:17.964 12:39:45 nvmf_dif -- common/autotest_common.sh@972 -- # wait 97853 00:28:17.964 12:39:45 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:17.964 12:39:45 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:17.964 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:17.964 Waiting for block devices as requested 00:28:17.964 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:17.964 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:17.964 12:39:46 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:17.964 12:39:46 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:17.964 12:39:46 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:17.964 12:39:46 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:17.964 12:39:46 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.964 12:39:46 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:17.964 12:39:46 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.964 12:39:46 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:17.964 00:28:17.964 real 0m59.797s 00:28:17.964 user 3m46.824s 00:28:17.964 sys 0m20.102s 00:28:17.964 12:39:46 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:17.964 12:39:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:17.964 ************************************ 00:28:17.964 END TEST nvmf_dif 00:28:17.964 ************************************ 00:28:17.964 12:39:46 -- common/autotest_common.sh@1142 -- # return 0 00:28:17.964 12:39:46 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:17.964 12:39:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:17.964 12:39:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:17.964 12:39:46 -- common/autotest_common.sh@10 -- # set +x 00:28:17.964 ************************************ 00:28:17.964 START TEST nvmf_abort_qd_sizes 00:28:17.964 ************************************ 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:17.964 * Looking for test storage... 00:28:17.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:17.964 12:39:46 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:17.964 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:17.965 Cannot find device "nvmf_tgt_br" 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:17.965 Cannot find device "nvmf_tgt_br2" 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:17.965 Cannot find device "nvmf_tgt_br" 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:17.965 Cannot find device "nvmf_tgt_br2" 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:17.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:17.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:17.965 12:39:46 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:17.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:28:17.965 00:28:17.965 --- 10.0.0.2 ping statistics --- 00:28:17.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.965 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:17.965 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:17.965 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:28:17.965 00:28:17.965 --- 10.0.0.3 ping statistics --- 00:28:17.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.965 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:17.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:17.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:28:17.965 00:28:17.965 --- 10.0.0.1 ping statistics --- 00:28:17.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.965 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:17.965 12:39:46 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:18.531 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:18.789 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:18.789 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:18.789 12:39:47 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:18.789 12:39:47 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:18.789 12:39:47 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:18.789 12:39:47 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:18.789 12:39:47 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:18.789 12:39:47 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:18.789 12:39:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:18.789 12:39:47 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:18.789 12:39:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:18.789 12:39:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:18.789 12:39:47 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=99192 00:28:18.789 12:39:47 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 99192 00:28:18.789 12:39:47 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:18.789 12:39:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 99192 ']' 00:28:18.789 12:39:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.789 12:39:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:18.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.789 12:39:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.789 12:39:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:18.789 12:39:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:18.789 [2024-07-12 12:39:47.813181] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
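The nvmf_veth_init steps traced above amount to a small veth/namespace/bridge topology: the initiator side stays in the host namespace on 10.0.0.1 while the target runs inside nvmf_tgt_ns_spdk on 10.0.0.2 (plus 10.0.0.3 on a second interface). A condensed sketch of that setup, using only the names and addresses visible in the log and leaving out the second target interface and the error-tolerant teardown that precedes it:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # host namespace reaching the target namespace over the bridge

Keeping the target in its own namespace is what lets the trace later start nvmf_tgt under "ip netns exec nvmf_tgt_ns_spdk" without touching the host's real network configuration.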
00:28:18.789 [2024-07-12 12:39:47.813270] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.047 [2024-07-12 12:39:47.955914] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:19.047 [2024-07-12 12:39:48.053952] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.047 [2024-07-12 12:39:48.054012] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.047 [2024-07-12 12:39:48.054034] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.047 [2024-07-12 12:39:48.054050] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.047 [2024-07-12 12:39:48.054064] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:19.047 [2024-07-12 12:39:48.054206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.047 [2024-07-12 12:39:48.054353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:19.047 [2024-07-12 12:39:48.054450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.047 [2024-07-12 12:39:48.054459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.047 [2024-07-12 12:39:48.116540] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:28:20.034 12:39:48 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:28:20.034 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
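The printf output above (class=01, subclass=08, progif=02) is the PCI identity of an NVM Express I/O controller, so the nvme_in_userspace enumeration reduces to the lspci filter the trace runs next; pulled out of the xtrace, the pipeline is:

    # prints BDFs such as 0000:00:10.0 and 0000:00:11.0 for every NVMe controller
    lspci -mm -n -D | grep -i -- -p02 | awk -v cc="0108" -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

The two devices found here are the QEMU-emulated NVMe drives (PCI ID 1b36:0010) that setup.sh rebound from nvme to uio_pci_generic earlier in the log.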
00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:20.035 12:39:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:20.035 ************************************ 00:28:20.035 START TEST spdk_target_abort 00:28:20.035 ************************************ 00:28:20.035 12:39:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:28:20.035 12:39:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:20.035 12:39:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:28:20.035 12:39:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.035 12:39:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:20.035 spdk_targetn1 00:28:20.035 12:39:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.035 12:39:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:20.035 12:39:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.035 12:39:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:20.035 [2024-07-12 12:39:48.984297] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.035 12:39:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.035 12:39:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:20.035 12:39:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.035 12:39:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:20.035 12:39:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.035 12:39:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:20.035 12:39:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.035 12:39:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:20.035 [2024-07-12 12:39:49.016418] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.035 12:39:49 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:20.035 12:39:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:23.312 Initializing NVMe Controllers 00:28:23.312 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:23.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:23.312 Initialization complete. Launching workers. 
00:28:23.312 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10834, failed: 0 00:28:23.312 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1018, failed to submit 9816 00:28:23.312 success 720, unsuccess 298, failed 0 00:28:23.312 12:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:23.312 12:39:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:26.600 Initializing NVMe Controllers 00:28:26.600 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:26.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:26.600 Initialization complete. Launching workers. 00:28:26.600 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8889, failed: 0 00:28:26.600 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1141, failed to submit 7748 00:28:26.600 success 389, unsuccess 752, failed 0 00:28:26.600 12:39:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:26.600 12:39:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:29.877 Initializing NVMe Controllers 00:28:29.877 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:29.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:29.877 Initialization complete. Launching workers. 
00:28:29.877 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31242, failed: 0 00:28:29.877 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2246, failed to submit 28996 00:28:29.877 success 445, unsuccess 1801, failed 0 00:28:29.877 12:39:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:29.877 12:39:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.877 12:39:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:29.877 12:39:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.877 12:39:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:29.877 12:39:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.877 12:39:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:30.135 12:39:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.135 12:39:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 99192 00:28:30.135 12:39:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 99192 ']' 00:28:30.135 12:39:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 99192 00:28:30.135 12:39:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:28:30.135 12:39:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:30.135 12:39:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99192 00:28:30.135 12:39:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:30.135 12:39:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:30.135 12:39:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99192' 00:28:30.135 killing process with pid 99192 00:28:30.135 12:39:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 99192 00:28:30.135 12:39:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 99192 00:28:30.392 00:28:30.392 real 0m10.476s 00:28:30.392 user 0m42.144s 00:28:30.392 sys 0m2.226s 00:28:30.392 12:39:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:30.392 12:39:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:30.392 ************************************ 00:28:30.392 END TEST spdk_target_abort 00:28:30.392 ************************************ 00:28:30.392 12:39:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:28:30.392 12:39:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:30.392 12:39:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:30.392 12:39:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:30.392 12:39:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:30.392 
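The three spdk_target_abort runs just finished differ only in the queue depth handed to the abort example; condensed, the loop that the rabort helper traces out is (paths and arguments exactly as in the log, with -M 50 selecting a 50% read mix for the rw workload):

    for qd in 4 24 64; do
        /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done

The kernel_target_abort test that starts below reuses the same loop, only swapping the transport address to the kernel target listening on 10.0.0.1.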
************************************ 00:28:30.392 START TEST kernel_target_abort 00:28:30.393 ************************************ 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:30.393 12:39:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:30.958 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:30.958 Waiting for block devices as requested 00:28:30.958 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:30.958 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:30.958 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:30.958 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:30.958 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:30.958 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:30.958 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:30.958 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:30.958 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:30.958 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:30.958 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:31.216 No valid GPT data, bailing 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:31.216 No valid GPT data, bailing 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:31.216 No valid GPT data, bailing 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:28:31.216 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:31.216 No valid GPT data, bailing 00:28:31.473 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:31.473 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 --hostid=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 -a 10.0.0.1 -t tcp -s 4420 00:28:31.474 00:28:31.474 Discovery Log Number of Records 2, Generation counter 2 00:28:31.474 =====Discovery Log Entry 0====== 00:28:31.474 trtype: tcp 00:28:31.474 adrfam: ipv4 00:28:31.474 subtype: current discovery subsystem 00:28:31.474 treq: not specified, sq flow control disable supported 00:28:31.474 portid: 1 00:28:31.474 trsvcid: 4420 00:28:31.474 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:31.474 traddr: 10.0.0.1 00:28:31.474 eflags: none 00:28:31.474 sectype: none 00:28:31.474 =====Discovery Log Entry 1====== 00:28:31.474 trtype: tcp 00:28:31.474 adrfam: ipv4 00:28:31.474 subtype: nvme subsystem 00:28:31.474 treq: not specified, sq flow control disable supported 00:28:31.474 portid: 1 00:28:31.474 trsvcid: 4420 00:28:31.474 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:31.474 traddr: 10.0.0.1 00:28:31.474 eflags: none 00:28:31.474 sectype: none 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:31.474 12:40:00 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:31.474 12:40:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:34.756 Initializing NVMe Controllers 00:28:34.756 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:34.756 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:34.756 Initialization complete. Launching workers. 00:28:34.756 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35682, failed: 0 00:28:34.756 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35682, failed to submit 0 00:28:34.756 success 0, unsuccess 35682, failed 0 00:28:34.756 12:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:34.756 12:40:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:38.035 Initializing NVMe Controllers 00:28:38.035 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:38.035 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:38.035 Initialization complete. Launching workers. 
00:28:38.035 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68644, failed: 0 00:28:38.035 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29337, failed to submit 39307 00:28:38.035 success 0, unsuccess 29337, failed 0 00:28:38.035 12:40:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:38.035 12:40:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:41.354 Initializing NVMe Controllers 00:28:41.354 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:41.354 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:41.354 Initialization complete. Launching workers. 00:28:41.354 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82093, failed: 0 00:28:41.354 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20490, failed to submit 61603 00:28:41.354 success 0, unsuccess 20490, failed 0 00:28:41.354 12:40:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:41.354 12:40:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:41.354 12:40:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:28:41.354 12:40:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:41.354 12:40:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:41.354 12:40:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:41.354 12:40:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:41.354 12:40:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:41.354 12:40:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:41.354 12:40:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:41.612 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:42.543 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:42.543 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:42.543 ************************************ 00:28:42.543 END TEST kernel_target_abort 00:28:42.543 ************************************ 00:28:42.543 00:28:42.543 real 0m12.144s 00:28:42.543 user 0m6.299s 00:28:42.543 sys 0m3.240s 00:28:42.543 12:40:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:42.543 12:40:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:42.543 12:40:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:28:42.543 12:40:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:42.543 
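Because xtrace hides redirections, the configure_kernel_target writes earlier in the log show up as bare echo commands. Mapped onto the usual nvmet configfs attribute files (the attribute names below are assumptions filled in here, not visible in the trace, and the model/allow-any-host writes are omitted), the setup and the clean_kernel_target teardown just traced look roughly like:

    nqn=nqn.2016-06.io.spdk:testnqn
    cfg=/sys/kernel/config/nvmet
    mkdir "$cfg/subsystems/$nqn" "$cfg/subsystems/$nqn/namespaces/1" "$cfg/ports/1"
    echo /dev/nvme1n1 > "$cfg/subsystems/$nqn/namespaces/1/device_path"
    echo 1            > "$cfg/subsystems/$nqn/namespaces/1/enable"
    echo 10.0.0.1     > "$cfg/ports/1/addr_traddr"
    echo tcp          > "$cfg/ports/1/addr_trtype"
    echo 4420         > "$cfg/ports/1/addr_trsvcid"
    echo ipv4         > "$cfg/ports/1/addr_adrfam"
    ln -s "$cfg/subsystems/$nqn" "$cfg/ports/1/subsystems/"
    # teardown, mirroring clean_kernel_target above
    rm -f "$cfg/ports/1/subsystems/$nqn"
    rmdir "$cfg/subsystems/$nqn/namespaces/1" "$cfg/ports/1" "$cfg/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet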
12:40:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:42.543 12:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:42.543 12:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:28:42.800 12:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:42.800 12:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:28:42.800 12:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:42.800 12:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:42.800 rmmod nvme_tcp 00:28:42.800 rmmod nvme_fabrics 00:28:42.800 rmmod nvme_keyring 00:28:42.800 12:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:42.800 12:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:28:42.800 12:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:28:42.800 12:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 99192 ']' 00:28:42.800 12:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 99192 00:28:42.800 Process with pid 99192 is not found 00:28:42.800 12:40:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 99192 ']' 00:28:42.800 12:40:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 99192 00:28:42.800 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (99192) - No such process 00:28:42.800 12:40:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 99192 is not found' 00:28:42.800 12:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:42.800 12:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:43.058 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:43.058 Waiting for block devices as requested 00:28:43.058 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:43.316 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:43.316 12:40:12 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:43.316 12:40:12 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:43.316 12:40:12 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:43.316 12:40:12 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:43.316 12:40:12 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.316 12:40:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:43.316 12:40:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.316 12:40:12 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:43.316 ************************************ 00:28:43.316 END TEST nvmf_abort_qd_sizes 00:28:43.316 ************************************ 00:28:43.316 00:28:43.316 real 0m25.878s 00:28:43.316 user 0m49.626s 00:28:43.316 sys 0m6.827s 00:28:43.316 12:40:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:43.316 12:40:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:43.316 12:40:12 -- common/autotest_common.sh@1142 -- # return 0 00:28:43.316 12:40:12 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:43.316 12:40:12 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:28:43.316 12:40:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:43.316 12:40:12 -- common/autotest_common.sh@10 -- # set +x 00:28:43.316 ************************************ 00:28:43.316 START TEST keyring_file 00:28:43.316 ************************************ 00:28:43.316 12:40:12 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:43.574 * Looking for test storage... 00:28:43.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:43.574 12:40:12 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:43.574 12:40:12 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:43.574 12:40:12 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.574 12:40:12 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.574 12:40:12 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.574 12:40:12 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.574 12:40:12 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.574 12:40:12 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.574 12:40:12 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:43.574 12:40:12 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@47 -- # : 0 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:43.574 12:40:12 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:43.574 12:40:12 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:43.574 12:40:12 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:43.574 12:40:12 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:43.574 12:40:12 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:43.574 12:40:12 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:43.574 12:40:12 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:43.574 12:40:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:43.574 12:40:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:43.574 12:40:12 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:43.574 12:40:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:43.574 12:40:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:43.574 12:40:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Zfm4NqgsSr 00:28:43.574 12:40:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:28:43.574 12:40:12 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:43.575 12:40:12 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:43.575 12:40:12 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:43.575 12:40:12 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:43.575 12:40:12 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:43.575 12:40:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Zfm4NqgsSr 00:28:43.575 12:40:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Zfm4NqgsSr 00:28:43.575 12:40:12 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Zfm4NqgsSr 00:28:43.575 12:40:12 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:43.575 12:40:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:43.575 12:40:12 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:43.575 12:40:12 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:43.575 12:40:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:43.575 12:40:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:43.575 12:40:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BJwGHs6Var 00:28:43.575 12:40:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:43.575 12:40:12 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:43.575 12:40:12 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:43.575 12:40:12 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:43.575 12:40:12 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:43.575 12:40:12 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:43.575 12:40:12 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:43.575 12:40:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BJwGHs6Var 00:28:43.575 12:40:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BJwGHs6Var 00:28:43.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.575 12:40:12 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.BJwGHs6Var 00:28:43.575 12:40:12 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:43.575 12:40:12 keyring_file -- keyring/file.sh@30 -- # tgtpid=100048 00:28:43.575 12:40:12 keyring_file -- keyring/file.sh@32 -- # waitforlisten 100048 00:28:43.575 12:40:12 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100048 ']' 00:28:43.575 12:40:12 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.575 12:40:12 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:43.575 12:40:12 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.575 12:40:12 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:43.575 12:40:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:43.575 [2024-07-12 12:40:12.646149] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
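The xtrace above is keyring/common.sh's prep_key helper building the first key file. A condensed bash sketch of the visible steps (the redirection into "$path" and the final assignment are assumptions about the helper's exact wording; the helper names, the key value, the digest argument, the temp path and the chmod mode are taken from the trace):

    key=00112233445566778899aabbccddeeff
    path=$(mktemp)                              # /tmp/tmp.Zfm4NqgsSr in this run
    format_interchange_psk "$key" 0 > "$path"   # nvmf/common.sh helper: wraps the hex key in the NVMeTLSkey-1 interchange format via an inline python snippet
    chmod 0600 "$path"                          # 0600 matters: a later step deliberately sets 0660 and the key is then rejected
    key0path=$path                              # file.sh stores the path for the keyring_file_add_key RPC
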
00:28:43.575 [2024-07-12 12:40:12.647113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100048 ] 00:28:43.832 [2024-07-12 12:40:12.788227] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.832 [2024-07-12 12:40:12.885142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.088 [2024-07-12 12:40:12.942781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:44.667 12:40:13 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:44.667 [2024-07-12 12:40:13.672176] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:44.667 null0 00:28:44.667 [2024-07-12 12:40:13.704134] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:44.667 [2024-07-12 12:40:13.704490] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:44.667 [2024-07-12 12:40:13.712132] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.667 12:40:13 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:44.667 [2024-07-12 12:40:13.724130] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:44.667 request: 00:28:44.667 { 00:28:44.667 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:44.667 "secure_channel": false, 00:28:44.667 "listen_address": { 00:28:44.667 "trtype": "tcp", 00:28:44.667 "traddr": "127.0.0.1", 00:28:44.667 "trsvcid": "4420" 00:28:44.667 }, 00:28:44.667 "method": "nvmf_subsystem_add_listener", 00:28:44.667 "req_id": 1 00:28:44.667 } 00:28:44.667 Got JSON-RPC error response 00:28:44.667 response: 00:28:44.667 { 00:28:44.667 "code": -32602, 00:28:44.667 "message": "Invalid parameters" 00:28:44.667 } 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 
]] 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:44.667 12:40:13 keyring_file -- keyring/file.sh@46 -- # bperfpid=100065 00:28:44.667 12:40:13 keyring_file -- keyring/file.sh@48 -- # waitforlisten 100065 /var/tmp/bperf.sock 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100065 ']' 00:28:44.667 12:40:13 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:44.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:44.667 12:40:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:44.925 [2024-07-12 12:40:13.780813] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:44.925 [2024-07-12 12:40:13.781113] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100065 ] 00:28:44.925 [2024-07-12 12:40:13.917366] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.182 [2024-07-12 12:40:14.008273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.182 [2024-07-12 12:40:14.065938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:45.747 12:40:14 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:45.747 12:40:14 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:45.747 12:40:14 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zfm4NqgsSr 00:28:45.747 12:40:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Zfm4NqgsSr 00:28:46.004 12:40:14 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BJwGHs6Var 00:28:46.004 12:40:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BJwGHs6Var 00:28:46.262 12:40:15 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:28:46.262 12:40:15 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:28:46.262 12:40:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:46.262 12:40:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:46.262 12:40:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:46.519 12:40:15 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.Zfm4NqgsSr == 
\/\t\m\p\/\t\m\p\.\Z\f\m\4\N\q\g\s\S\r ]] 00:28:46.519 12:40:15 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:28:46.519 12:40:15 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:46.519 12:40:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:46.519 12:40:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:46.519 12:40:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:46.776 12:40:15 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.BJwGHs6Var == \/\t\m\p\/\t\m\p\.\B\J\w\G\H\s\6\V\a\r ]] 00:28:46.776 12:40:15 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:28:46.777 12:40:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:46.777 12:40:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:46.777 12:40:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:46.777 12:40:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:46.777 12:40:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:47.034 12:40:16 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:28:47.034 12:40:16 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:28:47.034 12:40:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:47.034 12:40:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:47.034 12:40:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:47.034 12:40:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:47.034 12:40:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:47.293 12:40:16 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:47.293 12:40:16 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:47.293 12:40:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:47.551 [2024-07-12 12:40:16.502233] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:47.551 nvme0n1 00:28:47.551 12:40:16 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:28:47.551 12:40:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:47.551 12:40:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:47.551 12:40:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:47.551 12:40:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:47.551 12:40:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:47.808 12:40:16 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:28:47.808 12:40:16 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:28:47.808 12:40:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:47.808 12:40:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:47.808 12:40:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:28:47.808 12:40:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:47.808 12:40:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:48.065 12:40:17 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:28:48.065 12:40:17 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:48.323 Running I/O for 1 seconds... 00:28:49.328 00:28:49.328 Latency(us) 00:28:49.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.328 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:49.328 nvme0n1 : 1.01 11703.94 45.72 0.00 0.00 10901.30 5779.08 18707.55 00:28:49.328 =================================================================================================================== 00:28:49.328 Total : 11703.94 45.72 0.00 0.00 10901.30 5779.08 18707.55 00:28:49.328 0 00:28:49.328 12:40:18 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:49.328 12:40:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:49.585 12:40:18 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:28:49.585 12:40:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:49.585 12:40:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:49.585 12:40:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:49.585 12:40:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:49.585 12:40:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:49.842 12:40:18 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:28:49.842 12:40:18 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:28:49.842 12:40:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:49.842 12:40:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:49.842 12:40:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:49.842 12:40:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:49.842 12:40:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:50.100 12:40:19 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:50.100 12:40:19 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:50.100 12:40:19 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:50.100 12:40:19 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:50.100 12:40:19 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:50.100 12:40:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:50.100 12:40:19 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:50.100 12:40:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
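Each refcount check traced in this stretch reduces to one RPC call filtered with jq. A condensed bash sketch of the get_refcnt pattern (the function wrapper itself is an assumption; the rpc.py invocation and both jq filters are copied from the traces):

    get_refcnt() {
        local name=$1
        # bperf_cmd is rpc.py pointed at the bdevperf RPC socket
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys |
            jq ".[] | select(.name == \"$name\")" |
            jq -r .refcnt
    }
    # e.g. key0 reports 2 while nvme0 is attached with --psk key0, and 1 again after bdev_nvme_detach_controller
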
00:28:50.100 12:40:19 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:50.100 12:40:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:50.358 [2024-07-12 12:40:19.289226] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:50.358 [2024-07-12 12:40:19.289548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc67e0 (107): Transport endpoint is not connected 00:28:50.358 [2024-07-12 12:40:19.290538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc67e0 (9): Bad file descriptor 00:28:50.358 [2024-07-12 12:40:19.291535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:50.358 [2024-07-12 12:40:19.291562] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:50.358 [2024-07-12 12:40:19.291574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:50.358 request: 00:28:50.358 { 00:28:50.358 "name": "nvme0", 00:28:50.358 "trtype": "tcp", 00:28:50.358 "traddr": "127.0.0.1", 00:28:50.358 "adrfam": "ipv4", 00:28:50.358 "trsvcid": "4420", 00:28:50.358 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:50.358 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:50.358 "prchk_reftag": false, 00:28:50.358 "prchk_guard": false, 00:28:50.358 "hdgst": false, 00:28:50.358 "ddgst": false, 00:28:50.358 "psk": "key1", 00:28:50.358 "method": "bdev_nvme_attach_controller", 00:28:50.358 "req_id": 1 00:28:50.358 } 00:28:50.358 Got JSON-RPC error response 00:28:50.358 response: 00:28:50.358 { 00:28:50.358 "code": -5, 00:28:50.358 "message": "Input/output error" 00:28:50.358 } 00:28:50.358 12:40:19 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:50.358 12:40:19 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:50.358 12:40:19 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:50.358 12:40:19 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:50.358 12:40:19 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:28:50.358 12:40:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:50.358 12:40:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:50.358 12:40:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:50.358 12:40:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:50.358 12:40:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:50.616 12:40:19 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:28:50.616 12:40:19 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:28:50.616 12:40:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:50.616 12:40:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:50.616 12:40:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:50.616 12:40:19 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:50.616 12:40:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:50.873 12:40:19 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:50.873 12:40:19 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:28:50.873 12:40:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:51.131 12:40:20 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:28:51.131 12:40:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:51.390 12:40:20 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:28:51.390 12:40:20 keyring_file -- keyring/file.sh@77 -- # jq length 00:28:51.390 12:40:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:51.649 12:40:20 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:28:51.649 12:40:20 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.Zfm4NqgsSr 00:28:51.649 12:40:20 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zfm4NqgsSr 00:28:51.649 12:40:20 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:51.649 12:40:20 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zfm4NqgsSr 00:28:51.649 12:40:20 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:51.649 12:40:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:51.649 12:40:20 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:51.649 12:40:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:51.649 12:40:20 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zfm4NqgsSr 00:28:51.650 12:40:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Zfm4NqgsSr 00:28:51.908 [2024-07-12 12:40:20.947935] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Zfm4NqgsSr': 0100660 00:28:51.908 [2024-07-12 12:40:20.947989] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:51.908 request: 00:28:51.908 { 00:28:51.908 "name": "key0", 00:28:51.908 "path": "/tmp/tmp.Zfm4NqgsSr", 00:28:51.908 "method": "keyring_file_add_key", 00:28:51.908 "req_id": 1 00:28:51.908 } 00:28:51.908 Got JSON-RPC error response 00:28:51.908 response: 00:28:51.908 { 00:28:51.908 "code": -1, 00:28:51.908 "message": "Operation not permitted" 00:28:51.908 } 00:28:51.908 12:40:20 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:51.908 12:40:20 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:51.908 12:40:20 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:51.908 12:40:20 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:51.908 12:40:20 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.Zfm4NqgsSr 00:28:51.908 12:40:20 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zfm4NqgsSr 00:28:51.908 12:40:20 keyring_file -- keyring/common.sh@8 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Zfm4NqgsSr 00:28:52.167 12:40:21 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.Zfm4NqgsSr 00:28:52.167 12:40:21 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:28:52.167 12:40:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:52.167 12:40:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:52.167 12:40:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:52.167 12:40:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:52.167 12:40:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:52.735 12:40:21 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:28:52.735 12:40:21 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:52.735 12:40:21 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:52.735 12:40:21 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:52.735 12:40:21 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:52.735 12:40:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:52.735 12:40:21 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:52.735 12:40:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:52.735 12:40:21 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:52.735 12:40:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:52.735 [2024-07-12 12:40:21.780120] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Zfm4NqgsSr': No such file or directory 00:28:52.735 [2024-07-12 12:40:21.780177] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:52.735 [2024-07-12 12:40:21.780204] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:52.735 [2024-07-12 12:40:21.780213] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:52.735 [2024-07-12 12:40:21.780222] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:52.735 request: 00:28:52.735 { 00:28:52.735 "name": "nvme0", 00:28:52.735 "trtype": "tcp", 00:28:52.735 "traddr": "127.0.0.1", 00:28:52.735 "adrfam": "ipv4", 00:28:52.735 "trsvcid": "4420", 00:28:52.735 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:52.735 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:52.735 "prchk_reftag": false, 00:28:52.735 "prchk_guard": false, 00:28:52.735 "hdgst": false, 00:28:52.735 "ddgst": false, 00:28:52.735 "psk": "key0", 00:28:52.735 "method": "bdev_nvme_attach_controller", 00:28:52.735 "req_id": 1 00:28:52.735 } 00:28:52.735 
Got JSON-RPC error response 00:28:52.735 response: 00:28:52.735 { 00:28:52.735 "code": -19, 00:28:52.735 "message": "No such device" 00:28:52.735 } 00:28:52.735 12:40:21 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:52.735 12:40:21 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:52.735 12:40:21 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:52.735 12:40:21 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:52.735 12:40:21 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:28:52.735 12:40:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:53.065 12:40:22 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:53.065 12:40:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:53.065 12:40:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:53.065 12:40:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:53.065 12:40:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:53.065 12:40:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:53.065 12:40:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.aaMdIa7Zku 00:28:53.065 12:40:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:53.065 12:40:22 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:53.065 12:40:22 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:53.065 12:40:22 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:53.065 12:40:22 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:53.065 12:40:22 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:53.065 12:40:22 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:53.065 12:40:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.aaMdIa7Zku 00:28:53.065 12:40:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.aaMdIa7Zku 00:28:53.065 12:40:22 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.aaMdIa7Zku 00:28:53.065 12:40:22 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.aaMdIa7Zku 00:28:53.065 12:40:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aaMdIa7Zku 00:28:53.325 12:40:22 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:53.325 12:40:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:53.583 nvme0n1 00:28:53.583 12:40:22 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:28:53.583 12:40:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:53.583 12:40:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:53.583 12:40:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:53.583 12:40:22 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:28:53.584 12:40:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:54.148 12:40:22 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:28:54.148 12:40:22 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:28:54.148 12:40:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:54.148 12:40:23 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:28:54.148 12:40:23 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:28:54.148 12:40:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:54.148 12:40:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:54.148 12:40:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:54.405 12:40:23 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:28:54.405 12:40:23 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:28:54.405 12:40:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:54.405 12:40:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:54.405 12:40:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:54.405 12:40:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:54.405 12:40:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:54.661 12:40:23 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:28:54.661 12:40:23 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:54.661 12:40:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:54.917 12:40:23 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:28:54.917 12:40:23 keyring_file -- keyring/file.sh@104 -- # jq length 00:28:54.917 12:40:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:55.174 12:40:24 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:28:55.174 12:40:24 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.aaMdIa7Zku 00:28:55.174 12:40:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aaMdIa7Zku 00:28:55.432 12:40:24 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BJwGHs6Var 00:28:55.432 12:40:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BJwGHs6Var 00:28:55.689 12:40:24 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:55.689 12:40:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:55.947 nvme0n1 00:28:55.947 12:40:25 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:28:55.947 12:40:25 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:28:56.512 12:40:25 keyring_file -- keyring/file.sh@112 -- # config='{ 00:28:56.512 "subsystems": [ 00:28:56.512 { 00:28:56.512 "subsystem": "keyring", 00:28:56.512 "config": [ 00:28:56.512 { 00:28:56.512 "method": "keyring_file_add_key", 00:28:56.512 "params": { 00:28:56.512 "name": "key0", 00:28:56.512 "path": "/tmp/tmp.aaMdIa7Zku" 00:28:56.512 } 00:28:56.512 }, 00:28:56.512 { 00:28:56.512 "method": "keyring_file_add_key", 00:28:56.512 "params": { 00:28:56.512 "name": "key1", 00:28:56.512 "path": "/tmp/tmp.BJwGHs6Var" 00:28:56.512 } 00:28:56.512 } 00:28:56.512 ] 00:28:56.512 }, 00:28:56.512 { 00:28:56.512 "subsystem": "iobuf", 00:28:56.512 "config": [ 00:28:56.512 { 00:28:56.512 "method": "iobuf_set_options", 00:28:56.512 "params": { 00:28:56.512 "small_pool_count": 8192, 00:28:56.512 "large_pool_count": 1024, 00:28:56.512 "small_bufsize": 8192, 00:28:56.512 "large_bufsize": 135168 00:28:56.512 } 00:28:56.512 } 00:28:56.512 ] 00:28:56.512 }, 00:28:56.512 { 00:28:56.512 "subsystem": "sock", 00:28:56.512 "config": [ 00:28:56.512 { 00:28:56.512 "method": "sock_set_default_impl", 00:28:56.512 "params": { 00:28:56.512 "impl_name": "uring" 00:28:56.512 } 00:28:56.512 }, 00:28:56.512 { 00:28:56.512 "method": "sock_impl_set_options", 00:28:56.512 "params": { 00:28:56.512 "impl_name": "ssl", 00:28:56.512 "recv_buf_size": 4096, 00:28:56.512 "send_buf_size": 4096, 00:28:56.512 "enable_recv_pipe": true, 00:28:56.512 "enable_quickack": false, 00:28:56.512 "enable_placement_id": 0, 00:28:56.512 "enable_zerocopy_send_server": true, 00:28:56.512 "enable_zerocopy_send_client": false, 00:28:56.512 "zerocopy_threshold": 0, 00:28:56.512 "tls_version": 0, 00:28:56.512 "enable_ktls": false 00:28:56.512 } 00:28:56.512 }, 00:28:56.512 { 00:28:56.512 "method": "sock_impl_set_options", 00:28:56.512 "params": { 00:28:56.512 "impl_name": "posix", 00:28:56.512 "recv_buf_size": 2097152, 00:28:56.512 "send_buf_size": 2097152, 00:28:56.512 "enable_recv_pipe": true, 00:28:56.512 "enable_quickack": false, 00:28:56.512 "enable_placement_id": 0, 00:28:56.512 "enable_zerocopy_send_server": true, 00:28:56.513 "enable_zerocopy_send_client": false, 00:28:56.513 "zerocopy_threshold": 0, 00:28:56.513 "tls_version": 0, 00:28:56.513 "enable_ktls": false 00:28:56.513 } 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "method": "sock_impl_set_options", 00:28:56.513 "params": { 00:28:56.513 "impl_name": "uring", 00:28:56.513 "recv_buf_size": 2097152, 00:28:56.513 "send_buf_size": 2097152, 00:28:56.513 "enable_recv_pipe": true, 00:28:56.513 "enable_quickack": false, 00:28:56.513 "enable_placement_id": 0, 00:28:56.513 "enable_zerocopy_send_server": false, 00:28:56.513 "enable_zerocopy_send_client": false, 00:28:56.513 "zerocopy_threshold": 0, 00:28:56.513 "tls_version": 0, 00:28:56.513 "enable_ktls": false 00:28:56.513 } 00:28:56.513 } 00:28:56.513 ] 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "subsystem": "vmd", 00:28:56.513 "config": [] 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "subsystem": "accel", 00:28:56.513 "config": [ 00:28:56.513 { 00:28:56.513 "method": "accel_set_options", 00:28:56.513 "params": { 00:28:56.513 "small_cache_size": 128, 00:28:56.513 "large_cache_size": 16, 00:28:56.513 "task_count": 2048, 00:28:56.513 "sequence_count": 2048, 00:28:56.513 "buf_count": 2048 00:28:56.513 } 00:28:56.513 } 00:28:56.513 ] 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "subsystem": "bdev", 00:28:56.513 "config": [ 00:28:56.513 { 
00:28:56.513 "method": "bdev_set_options", 00:28:56.513 "params": { 00:28:56.513 "bdev_io_pool_size": 65535, 00:28:56.513 "bdev_io_cache_size": 256, 00:28:56.513 "bdev_auto_examine": true, 00:28:56.513 "iobuf_small_cache_size": 128, 00:28:56.513 "iobuf_large_cache_size": 16 00:28:56.513 } 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "method": "bdev_raid_set_options", 00:28:56.513 "params": { 00:28:56.513 "process_window_size_kb": 1024 00:28:56.513 } 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "method": "bdev_iscsi_set_options", 00:28:56.513 "params": { 00:28:56.513 "timeout_sec": 30 00:28:56.513 } 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "method": "bdev_nvme_set_options", 00:28:56.513 "params": { 00:28:56.513 "action_on_timeout": "none", 00:28:56.513 "timeout_us": 0, 00:28:56.513 "timeout_admin_us": 0, 00:28:56.513 "keep_alive_timeout_ms": 10000, 00:28:56.513 "arbitration_burst": 0, 00:28:56.513 "low_priority_weight": 0, 00:28:56.513 "medium_priority_weight": 0, 00:28:56.513 "high_priority_weight": 0, 00:28:56.513 "nvme_adminq_poll_period_us": 10000, 00:28:56.513 "nvme_ioq_poll_period_us": 0, 00:28:56.513 "io_queue_requests": 512, 00:28:56.513 "delay_cmd_submit": true, 00:28:56.513 "transport_retry_count": 4, 00:28:56.513 "bdev_retry_count": 3, 00:28:56.513 "transport_ack_timeout": 0, 00:28:56.513 "ctrlr_loss_timeout_sec": 0, 00:28:56.513 "reconnect_delay_sec": 0, 00:28:56.513 "fast_io_fail_timeout_sec": 0, 00:28:56.513 "disable_auto_failback": false, 00:28:56.513 "generate_uuids": false, 00:28:56.513 "transport_tos": 0, 00:28:56.513 "nvme_error_stat": false, 00:28:56.513 "rdma_srq_size": 0, 00:28:56.513 "io_path_stat": false, 00:28:56.513 "allow_accel_sequence": false, 00:28:56.513 "rdma_max_cq_size": 0, 00:28:56.513 "rdma_cm_event_timeout_ms": 0, 00:28:56.513 "dhchap_digests": [ 00:28:56.513 "sha256", 00:28:56.513 "sha384", 00:28:56.513 "sha512" 00:28:56.513 ], 00:28:56.513 "dhchap_dhgroups": [ 00:28:56.513 "null", 00:28:56.513 "ffdhe2048", 00:28:56.513 "ffdhe3072", 00:28:56.513 "ffdhe4096", 00:28:56.513 "ffdhe6144", 00:28:56.513 "ffdhe8192" 00:28:56.513 ] 00:28:56.513 } 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "method": "bdev_nvme_attach_controller", 00:28:56.513 "params": { 00:28:56.513 "name": "nvme0", 00:28:56.513 "trtype": "TCP", 00:28:56.513 "adrfam": "IPv4", 00:28:56.513 "traddr": "127.0.0.1", 00:28:56.513 "trsvcid": "4420", 00:28:56.513 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:56.513 "prchk_reftag": false, 00:28:56.513 "prchk_guard": false, 00:28:56.513 "ctrlr_loss_timeout_sec": 0, 00:28:56.513 "reconnect_delay_sec": 0, 00:28:56.513 "fast_io_fail_timeout_sec": 0, 00:28:56.513 "psk": "key0", 00:28:56.513 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:56.513 "hdgst": false, 00:28:56.513 "ddgst": false 00:28:56.513 } 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "method": "bdev_nvme_set_hotplug", 00:28:56.513 "params": { 00:28:56.513 "period_us": 100000, 00:28:56.513 "enable": false 00:28:56.513 } 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "method": "bdev_wait_for_examine" 00:28:56.513 } 00:28:56.513 ] 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "subsystem": "nbd", 00:28:56.513 "config": [] 00:28:56.513 } 00:28:56.513 ] 00:28:56.513 }' 00:28:56.513 12:40:25 keyring_file -- keyring/file.sh@114 -- # killprocess 100065 00:28:56.513 12:40:25 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100065 ']' 00:28:56.513 12:40:25 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100065 00:28:56.513 12:40:25 keyring_file -- common/autotest_common.sh@953 -- # uname 
00:28:56.513 12:40:25 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:56.513 12:40:25 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100065 00:28:56.513 killing process with pid 100065 00:28:56.513 Received shutdown signal, test time was about 1.000000 seconds 00:28:56.513 00:28:56.513 Latency(us) 00:28:56.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.513 =================================================================================================================== 00:28:56.513 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:56.513 12:40:25 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:56.513 12:40:25 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:56.513 12:40:25 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100065' 00:28:56.513 12:40:25 keyring_file -- common/autotest_common.sh@967 -- # kill 100065 00:28:56.513 12:40:25 keyring_file -- common/autotest_common.sh@972 -- # wait 100065 00:28:56.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:56.513 12:40:25 keyring_file -- keyring/file.sh@117 -- # bperfpid=100309 00:28:56.513 12:40:25 keyring_file -- keyring/file.sh@119 -- # waitforlisten 100309 /var/tmp/bperf.sock 00:28:56.513 12:40:25 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100309 ']' 00:28:56.513 12:40:25 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:56.513 12:40:25 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:56.513 12:40:25 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
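The bdevperf instance with pid 100309 is relaunched with the configuration captured by save_config, handed over as a file descriptor: the /dev/fd/63 in the bdevperf command line traced below is what bash process substitution expands to. The restart step is approximately the following (the exact wording in keyring/file.sh and the bperfpid=$! capture are assumptions; the bdevperf flags, RPC socket path and waitforlisten call match the traces):

    config=$(bperf_cmd save_config)     # the JSON dump shown above, including both keyring_file_add_key entries
    killprocess "$bperfpid"
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config") &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock
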
00:28:56.513 12:40:25 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:56.513 12:40:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:56.513 12:40:25 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:28:56.513 "subsystems": [ 00:28:56.513 { 00:28:56.513 "subsystem": "keyring", 00:28:56.513 "config": [ 00:28:56.513 { 00:28:56.513 "method": "keyring_file_add_key", 00:28:56.513 "params": { 00:28:56.513 "name": "key0", 00:28:56.513 "path": "/tmp/tmp.aaMdIa7Zku" 00:28:56.513 } 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "method": "keyring_file_add_key", 00:28:56.513 "params": { 00:28:56.513 "name": "key1", 00:28:56.513 "path": "/tmp/tmp.BJwGHs6Var" 00:28:56.513 } 00:28:56.513 } 00:28:56.513 ] 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "subsystem": "iobuf", 00:28:56.513 "config": [ 00:28:56.513 { 00:28:56.513 "method": "iobuf_set_options", 00:28:56.513 "params": { 00:28:56.513 "small_pool_count": 8192, 00:28:56.513 "large_pool_count": 1024, 00:28:56.513 "small_bufsize": 8192, 00:28:56.513 "large_bufsize": 135168 00:28:56.513 } 00:28:56.513 } 00:28:56.513 ] 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "subsystem": "sock", 00:28:56.513 "config": [ 00:28:56.513 { 00:28:56.513 "method": "sock_set_default_impl", 00:28:56.513 "params": { 00:28:56.513 "impl_name": "uring" 00:28:56.513 } 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "method": "sock_impl_set_options", 00:28:56.513 "params": { 00:28:56.513 "impl_name": "ssl", 00:28:56.514 "recv_buf_size": 4096, 00:28:56.514 "send_buf_size": 4096, 00:28:56.514 "enable_recv_pipe": true, 00:28:56.514 "enable_quickack": false, 00:28:56.514 "enable_placement_id": 0, 00:28:56.514 "enable_zerocopy_send_server": true, 00:28:56.514 "enable_zerocopy_send_client": false, 00:28:56.514 "zerocopy_threshold": 0, 00:28:56.514 "tls_version": 0, 00:28:56.514 "enable_ktls": false 00:28:56.514 } 00:28:56.514 }, 00:28:56.514 { 00:28:56.514 "method": "sock_impl_set_options", 00:28:56.514 "params": { 00:28:56.514 "impl_name": "posix", 00:28:56.514 "recv_buf_size": 2097152, 00:28:56.514 "send_buf_size": 2097152, 00:28:56.514 "enable_recv_pipe": true, 00:28:56.514 "enable_quickack": false, 00:28:56.514 "enable_placement_id": 0, 00:28:56.514 "enable_zerocopy_send_server": true, 00:28:56.514 "enable_zerocopy_send_client": false, 00:28:56.514 "zerocopy_threshold": 0, 00:28:56.514 "tls_version": 0, 00:28:56.514 "enable_ktls": false 00:28:56.514 } 00:28:56.514 }, 00:28:56.514 { 00:28:56.514 "method": "sock_impl_set_options", 00:28:56.514 "params": { 00:28:56.514 "impl_name": "uring", 00:28:56.514 "recv_buf_size": 2097152, 00:28:56.514 "send_buf_size": 2097152, 00:28:56.514 "enable_recv_pipe": true, 00:28:56.514 "enable_quickack": false, 00:28:56.514 "enable_placement_id": 0, 00:28:56.514 "enable_zerocopy_send_server": false, 00:28:56.514 "enable_zerocopy_send_client": false, 00:28:56.514 "zerocopy_threshold": 0, 00:28:56.514 "tls_version": 0, 00:28:56.514 "enable_ktls": false 00:28:56.514 } 00:28:56.514 } 00:28:56.514 ] 00:28:56.514 }, 00:28:56.514 { 00:28:56.514 "subsystem": "vmd", 00:28:56.514 "config": [] 00:28:56.514 }, 00:28:56.514 { 00:28:56.514 "subsystem": "accel", 00:28:56.514 "config": [ 00:28:56.514 { 00:28:56.514 "method": "accel_set_options", 00:28:56.514 "params": { 00:28:56.514 "small_cache_size": 128, 00:28:56.514 "large_cache_size": 16, 00:28:56.514 "task_count": 2048, 00:28:56.514 "sequence_count": 2048, 00:28:56.514 "buf_count": 2048 00:28:56.514 } 00:28:56.514 } 00:28:56.514 ] 00:28:56.514 }, 00:28:56.514 { 00:28:56.514 
"subsystem": "bdev", 00:28:56.514 "config": [ 00:28:56.514 { 00:28:56.514 "method": "bdev_set_options", 00:28:56.514 "params": { 00:28:56.514 "bdev_io_pool_size": 65535, 00:28:56.514 "bdev_io_cache_size": 256, 00:28:56.514 "bdev_auto_examine": true, 00:28:56.514 "iobuf_small_cache_size": 128, 00:28:56.514 "iobuf_large_cache_size": 16 00:28:56.514 } 00:28:56.514 }, 00:28:56.514 { 00:28:56.514 "method": "bdev_raid_set_options", 00:28:56.514 "params": { 00:28:56.514 "process_window_size_kb": 1024 00:28:56.514 } 00:28:56.514 }, 00:28:56.514 { 00:28:56.514 "method": "bdev_iscsi_set_options", 00:28:56.514 "params": { 00:28:56.514 "timeout_sec": 30 00:28:56.514 } 00:28:56.514 }, 00:28:56.514 { 00:28:56.514 "method": "bdev_nvme_set_options", 00:28:56.514 "params": { 00:28:56.514 "action_on_timeout": "none", 00:28:56.514 "timeout_us": 0, 00:28:56.514 "timeout_admin_us": 0, 00:28:56.514 "keep_alive_timeout_ms": 10000, 00:28:56.514 "arbitration_burst": 0, 00:28:56.514 "low_priority_weight": 0, 00:28:56.514 "medium_priority_weight": 0, 00:28:56.514 "high_priority_weight": 0, 00:28:56.514 "nvme_adminq_poll_period_us": 10000, 00:28:56.514 "nvme_ioq_poll_period_us": 0, 00:28:56.514 "io_queue_requests": 512, 00:28:56.514 "delay_cm 12:40:25 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:56.514 d_submit": true, 00:28:56.514 "transport_retry_count": 4, 00:28:56.514 "bdev_retry_count": 3, 00:28:56.514 "transport_ack_timeout": 0, 00:28:56.514 "ctrlr_loss_timeout_sec": 0, 00:28:56.514 "reconnect_delay_sec": 0, 00:28:56.514 "fast_io_fail_timeout_sec": 0, 00:28:56.514 "disable_auto_failback": false, 00:28:56.514 "generate_uuids": false, 00:28:56.514 "transport_tos": 0, 00:28:56.514 "nvme_error_stat": false, 00:28:56.514 "rdma_srq_size": 0, 00:28:56.514 "io_path_stat": false, 00:28:56.514 "allow_accel_sequence": false, 00:28:56.514 "rdma_max_cq_size": 0, 00:28:56.514 "rdma_cm_event_timeout_ms": 0, 00:28:56.514 "dhchap_digests": [ 00:28:56.514 "sha256", 00:28:56.514 "sha384", 00:28:56.514 "sha512" 00:28:56.514 ], 00:28:56.514 "dhchap_dhgroups": [ 00:28:56.514 "null", 00:28:56.514 "ffdhe2048", 00:28:56.514 "ffdhe3072", 00:28:56.514 "ffdhe4096", 00:28:56.514 "ffdhe6144", 00:28:56.514 "ffdhe8192" 00:28:56.514 ] 00:28:56.514 } 00:28:56.514 }, 00:28:56.514 { 00:28:56.514 "method": "bdev_nvme_attach_controller", 00:28:56.514 "params": { 00:28:56.514 "name": "nvme0", 00:28:56.514 "trtype": "TCP", 00:28:56.514 "adrfam": "IPv4", 00:28:56.514 "traddr": "127.0.0.1", 00:28:56.514 "trsvcid": "4420", 00:28:56.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:56.514 "prchk_reftag": false, 00:28:56.514 "prchk_guard": false, 00:28:56.514 "ctrlr_loss_timeout_sec": 0, 00:28:56.514 "reconnect_delay_sec": 0, 00:28:56.514 "fast_io_fail_timeout_sec": 0, 00:28:56.514 "psk": "key0", 00:28:56.514 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:56.514 "hdgst": false, 00:28:56.514 "ddgst": false 00:28:56.514 } 00:28:56.514 }, 00:28:56.514 { 00:28:56.514 "method": "bdev_nvme_set_hotplug", 00:28:56.514 "params": { 00:28:56.514 "period_us": 100000, 00:28:56.514 "enable": false 00:28:56.514 } 00:28:56.514 }, 00:28:56.514 { 00:28:56.514 "method": "bdev_wait_for_examine" 00:28:56.514 } 00:28:56.514 ] 00:28:56.514 }, 00:28:56.514 { 00:28:56.514 "subsystem": "nbd", 00:28:56.514 "config": [] 00:28:56.514 } 00:28:56.514 ] 00:28:56.514 }' 00:28:56.772 [2024-07-12 12:40:25.599719] Starting SPDK v24.09-pre git sha1 719d03c6a / 
DPDK 22.11.4 initialization... 00:28:56.772 [2024-07-12 12:40:25.600415] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100309 ] 00:28:56.772 [2024-07-12 12:40:25.733056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.772 [2024-07-12 12:40:25.834182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.029 [2024-07-12 12:40:25.970093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:57.029 [2024-07-12 12:40:26.023351] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:57.594 12:40:26 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:57.594 12:40:26 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:57.594 12:40:26 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:28:57.594 12:40:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:57.594 12:40:26 keyring_file -- keyring/file.sh@120 -- # jq length 00:28:57.853 12:40:26 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:28:57.853 12:40:26 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:28:57.853 12:40:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:57.853 12:40:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:57.853 12:40:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:57.853 12:40:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:57.853 12:40:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:58.112 12:40:27 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:28:58.112 12:40:27 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:28:58.112 12:40:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:58.112 12:40:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:58.112 12:40:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:58.112 12:40:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:58.112 12:40:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:58.369 12:40:27 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:28:58.369 12:40:27 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:28:58.369 12:40:27 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:28:58.369 12:40:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:28:58.626 12:40:27 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:28:58.626 12:40:27 keyring_file -- keyring/file.sh@1 -- # cleanup 00:28:58.626 12:40:27 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.aaMdIa7Zku /tmp/tmp.BJwGHs6Var 00:28:58.626 12:40:27 keyring_file -- keyring/file.sh@20 -- # killprocess 100309 00:28:58.626 12:40:27 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100309 ']' 00:28:58.626 12:40:27 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100309 00:28:58.626 12:40:27 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:58.626 12:40:27 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:58.626 12:40:27 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100309 00:28:58.883 killing process with pid 100309 00:28:58.883 Received shutdown signal, test time was about 1.000000 seconds 00:28:58.883 00:28:58.883 Latency(us) 00:28:58.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.883 =================================================================================================================== 00:28:58.883 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:58.883 12:40:27 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:58.883 12:40:27 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:58.883 12:40:27 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100309' 00:28:58.883 12:40:27 keyring_file -- common/autotest_common.sh@967 -- # kill 100309 00:28:58.883 12:40:27 keyring_file -- common/autotest_common.sh@972 -- # wait 100309 00:28:58.883 12:40:27 keyring_file -- keyring/file.sh@21 -- # killprocess 100048 00:28:58.883 12:40:27 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100048 ']' 00:28:58.883 12:40:27 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100048 00:28:58.883 12:40:27 keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:58.883 12:40:27 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:58.883 12:40:27 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100048 00:28:58.883 killing process with pid 100048 00:28:58.883 12:40:27 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:58.883 12:40:27 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:58.883 12:40:27 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100048' 00:28:58.883 12:40:27 keyring_file -- common/autotest_common.sh@967 -- # kill 100048 00:28:58.883 [2024-07-12 12:40:27.946234] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:58.883 12:40:27 keyring_file -- common/autotest_common.sh@972 -- # wait 100048 00:28:59.448 ************************************ 00:28:59.448 END TEST keyring_file 00:28:59.448 ************************************ 00:28:59.448 00:28:59.448 real 0m15.957s 00:28:59.448 user 0m39.809s 00:28:59.448 sys 0m3.114s 00:28:59.448 12:40:28 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:59.448 12:40:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:59.448 12:40:28 -- common/autotest_common.sh@1142 -- # return 0 00:28:59.448 12:40:28 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:28:59.448 12:40:28 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:28:59.448 12:40:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:59.448 12:40:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:59.448 12:40:28 -- common/autotest_common.sh@10 -- # set +x 00:28:59.448 ************************************ 00:28:59.448 START TEST keyring_linux 00:28:59.448 ************************************ 00:28:59.448 12:40:28 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 
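Both keyring suites in this log are driven through run_test, which prints the START/END banners and times each script with time; that is where the real/user/sys lines above come from. A minimal bash sketch of that pattern (banner text and the use of time are taken from the output; the function body itself is an assumption, since the real autotest_common.sh wrapper also toggles xtrace and tracks exit codes):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"            # e.g. /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
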
00:28:59.448 * Looking for test storage... 00:28:59.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:59.448 12:40:28 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:59.448 12:40:28 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:59.448 12:40:28 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:28:59.448 12:40:28 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.448 12:40:28 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.448 12:40:28 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.448 12:40:28 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.448 12:40:28 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.448 12:40:28 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.448 12:40:28 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.448 12:40:28 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.448 12:40:28 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.448 12:40:28 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.448 12:40:28 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:28:59.448 12:40:28 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=2eac55f6-d7d4-4e29-a8c9-3bef4a960b93 00:28:59.448 12:40:28 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.448 12:40:28 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.448 12:40:28 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:59.448 12:40:28 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.448 12:40:28 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:59.448 12:40:28 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.448 12:40:28 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.448 12:40:28 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.448 12:40:28 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.448 12:40:28 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.448 12:40:28 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.448 12:40:28 keyring_linux -- paths/export.sh@5 -- # export PATH 00:28:59.449 12:40:28 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.449 12:40:28 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:28:59.449 12:40:28 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:59.449 12:40:28 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:59.449 12:40:28 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.449 12:40:28 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.449 12:40:28 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.449 12:40:28 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:59.449 12:40:28 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:59.449 12:40:28 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:59.449 12:40:28 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:59.449 12:40:28 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:59.449 12:40:28 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:59.449 12:40:28 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:28:59.449 12:40:28 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:28:59.449 12:40:28 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:28:59.449 12:40:28 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:28:59.449 12:40:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:59.449 12:40:28 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:28:59.449 12:40:28 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:59.449 12:40:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:59.449 12:40:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:28:59.449 12:40:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:59.449 12:40:28 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:59.449 12:40:28 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:59.449 12:40:28 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:59.449 12:40:28 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:59.449 12:40:28 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:59.449 12:40:28 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:59.449 12:40:28 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:28:59.449 /tmp/:spdk-test:key0 00:28:59.449 12:40:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:28:59.449 12:40:28 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:28:59.449 12:40:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:59.449 12:40:28 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:28:59.449 12:40:28 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:59.449 12:40:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:59.449 12:40:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:28:59.449 12:40:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:59.449 12:40:28 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:59.449 12:40:28 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:59.449 12:40:28 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:59.449 12:40:28 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:59.449 12:40:28 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:59.449 12:40:28 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:59.706 12:40:28 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:28:59.706 12:40:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:28:59.706 /tmp/:spdk-test:key1 00:28:59.706 12:40:28 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=100426 00:28:59.706 12:40:28 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:59.706 12:40:28 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 100426 00:28:59.706 12:40:28 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 100426 ']' 00:28:59.706 12:40:28 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.706 12:40:28 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:59.706 12:40:28 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.706 12:40:28 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:59.706 12:40:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:59.706 [2024-07-12 12:40:28.628215] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
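prep_key above formats each hex key into an NVMe TLS interchange string and parks it in a mode-0600 file; the inline python that does the formatting is elided in the trace. A rough stand-in, reusing the already-formatted strings that appear later in this log and treating their derivation as given (whether the helper writes a trailing newline is not visible here and is assumed away):

  # Illustrative stand-in for prep_key: write a pre-formatted PSK interchange
  # string to a private file and hand back the path, mirroring the chmod 0600
  # calls shown above. How the strings are derived from the raw hex keys is
  # not reproduced here.
  prep_key_file() {
      local path=$1 psk=$2
      printf '%s' "$psk" > "$path"   # assumption: the helper stores the bare string
      chmod 0600 "$path"             # matches keyring/common.sh@21 above
      echo "$path"
  }

  prep_key_file /tmp/:spdk-test:key0 \
      'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
  prep_key_file /tmp/:spdk-test:key1 \
      'NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:'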
00:28:59.706 [2024-07-12 12:40:28.628561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100426 ] 00:28:59.706 [2024-07-12 12:40:28.766349] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.965 [2024-07-12 12:40:28.887705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.965 [2024-07-12 12:40:28.944284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:29:00.901 12:40:29 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:00.901 12:40:29 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:29:00.901 12:40:29 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:00.901 12:40:29 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.901 12:40:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:00.901 [2024-07-12 12:40:29.631938] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.901 null0 00:29:00.901 [2024-07-12 12:40:29.663883] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:00.901 [2024-07-12 12:40:29.664129] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:00.901 12:40:29 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.901 12:40:29 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:00.901 144528632 00:29:00.901 12:40:29 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:00.901 567227309 00:29:00.901 12:40:29 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=100440 00:29:00.901 12:40:29 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:00.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:00.901 12:40:29 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 100440 /var/tmp/bperf.sock 00:29:00.901 12:40:29 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 100440 ']' 00:29:00.901 12:40:29 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:00.901 12:40:29 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:00.901 12:40:29 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:00.901 12:40:29 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:00.901 12:40:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:00.901 [2024-07-12 12:40:29.751179] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
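The keyctl add calls just above place those interchange strings into the kernel session keyring (@s) and hand back the serials (144528632 and 567227309) that the later checks compare against. The same round trip, condensed into a sketch built only from the keyctl commands visible in this trace:

  # Round trip against the kernel session keyring (@s): add a user-type key,
  # look its serial up by name, read the payload back, and unlink it again
  # (the unlink is what cleanup() does per key at the end of the test).
  name=':spdk-test:key0'
  psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'

  sn=$(keyctl add user "$name" "$psk" @s)   # returns the key serial, e.g. 144528632
  keyctl search @s user "$name"             # resolves the same serial by name
  keyctl print "$sn"                        # prints the stored interchange string
  keyctl unlink "$sn"                       # removes the key, as the cleanup step does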
00:29:00.901 [2024-07-12 12:40:29.751571] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100440 ] 00:29:00.901 [2024-07-12 12:40:29.893864] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.157 [2024-07-12 12:40:29.998104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.720 12:40:30 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:01.720 12:40:30 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:29:01.720 12:40:30 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:01.720 12:40:30 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:01.977 12:40:30 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:01.977 12:40:30 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:02.233 [2024-07-12 12:40:31.268229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:29:02.489 12:40:31 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:02.489 12:40:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:02.747 [2024-07-12 12:40:31.571852] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:02.747 nvme0n1 00:29:02.747 12:40:31 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:02.747 12:40:31 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:02.747 12:40:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:02.747 12:40:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:02.747 12:40:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:02.747 12:40:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.004 12:40:31 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:03.004 12:40:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:03.004 12:40:31 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:03.004 12:40:31 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:03.004 12:40:31 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:03.005 12:40:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.005 12:40:31 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:03.262 12:40:32 keyring_linux -- keyring/linux.sh@25 -- # sn=144528632 00:29:03.262 12:40:32 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:03.262 12:40:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
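With the keys in the session keyring, everything on the bdevperf side is driven over its RPC socket: the Linux keyring plugin is enabled, the framework is started (bdevperf was launched with --wait-for-rpc), and the controller is attached with --psk naming the kernel key rather than a file. A condensed sketch of that sequence, with paths and NQNs taken from the trace:

  # Condensed view of the RPC sequence driven against bdevperf above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock

  "$rpc" -s "$sock" keyring_linux_set_options --enable   # resolve PSKs from the kernel keyring
  "$rpc" -s "$sock" framework_start_init                 # bdevperf was started with --wait-for-rpc
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 \
      -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
      --psk :spdk-test:key0                              # PSK referenced by kernel key name, not a file

  # The key SPDK reports must resolve to the same serial keyctl knows about.
  sn=$("$rpc" -s "$sock" keyring_get_keys \
        | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
  (( sn == $(keyctl search @s user :spdk-test:key0) ))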
00:29:03.262 12:40:32 keyring_linux -- keyring/linux.sh@26 -- # [[ 144528632 == \1\4\4\5\2\8\6\3\2 ]] 00:29:03.262 12:40:32 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 144528632 00:29:03.262 12:40:32 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:03.262 12:40:32 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:03.262 Running I/O for 1 seconds... 00:29:04.635 00:29:04.635 Latency(us) 00:29:04.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.636 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:04.636 nvme0n1 : 1.01 12949.99 50.59 0.00 0.00 9827.68 2949.12 12928.47 00:29:04.636 =================================================================================================================== 00:29:04.636 Total : 12949.99 50.59 0.00 0.00 9827.68 2949.12 12928.47 00:29:04.636 0 00:29:04.636 12:40:33 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:04.636 12:40:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:04.636 12:40:33 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:29:04.636 12:40:33 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:29:04.636 12:40:33 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:04.636 12:40:33 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:04.636 12:40:33 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:04.636 12:40:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:04.893 12:40:33 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:29:04.894 12:40:33 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:04.894 12:40:33 keyring_linux -- keyring/linux.sh@23 -- # return 00:29:04.894 12:40:33 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:04.894 12:40:33 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:29:04.894 12:40:33 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:04.894 12:40:33 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:04.894 12:40:33 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:04.894 12:40:33 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:04.894 12:40:33 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:04.894 12:40:33 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:04.894 12:40:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:05.151 [2024-07-12 12:40:34.138457] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:05.151 [2024-07-12 12:40:34.139420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc51270 (107): Transport endpoint is not connected 00:29:05.151 [2024-07-12 12:40:34.140409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc51270 (9): Bad file descriptor 00:29:05.151 [2024-07-12 12:40:34.141406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:05.151 [2024-07-12 12:40:34.141425] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:05.151 [2024-07-12 12:40:34.141435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:05.151 request: 00:29:05.151 { 00:29:05.151 "name": "nvme0", 00:29:05.151 "trtype": "tcp", 00:29:05.151 "traddr": "127.0.0.1", 00:29:05.151 "adrfam": "ipv4", 00:29:05.151 "trsvcid": "4420", 00:29:05.151 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:05.151 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:05.151 "prchk_reftag": false, 00:29:05.151 "prchk_guard": false, 00:29:05.151 "hdgst": false, 00:29:05.151 "ddgst": false, 00:29:05.151 "psk": ":spdk-test:key1", 00:29:05.151 "method": "bdev_nvme_attach_controller", 00:29:05.151 "req_id": 1 00:29:05.151 } 00:29:05.151 Got JSON-RPC error response 00:29:05.151 response: 00:29:05.151 { 00:29:05.151 "code": -5, 00:29:05.151 "message": "Input/output error" 00:29:05.151 } 00:29:05.151 12:40:34 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:29:05.151 12:40:34 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:05.151 12:40:34 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:05.151 12:40:34 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:05.151 12:40:34 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:05.151 12:40:34 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:05.151 12:40:34 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:05.151 12:40:34 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:05.151 12:40:34 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:05.151 12:40:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:05.151 12:40:34 keyring_linux -- keyring/linux.sh@33 -- # sn=144528632 00:29:05.151 12:40:34 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 144528632 00:29:05.151 1 links removed 00:29:05.151 12:40:34 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:05.151 12:40:34 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:05.152 12:40:34 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:29:05.152 12:40:34 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:05.152 12:40:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:05.152 12:40:34 keyring_linux -- keyring/linux.sh@33 -- # sn=567227309 00:29:05.152 12:40:34 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 567227309 00:29:05.152 1 links removed 00:29:05.152 12:40:34 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 100440 00:29:05.152 12:40:34 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 100440 ']' 00:29:05.152 12:40:34 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 100440 00:29:05.152 12:40:34 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:29:05.152 12:40:34 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:05.152 12:40:34 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100440 00:29:05.152 killing process with pid 100440 00:29:05.152 Received shutdown signal, test time was about 1.000000 seconds 00:29:05.152 00:29:05.152 Latency(us) 00:29:05.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.152 =================================================================================================================== 00:29:05.152 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:05.152 12:40:34 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:05.152 12:40:34 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:05.152 12:40:34 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100440' 00:29:05.152 12:40:34 keyring_linux -- common/autotest_common.sh@967 -- # kill 100440 00:29:05.152 12:40:34 keyring_linux -- common/autotest_common.sh@972 -- # wait 100440 00:29:05.410 12:40:34 keyring_linux -- keyring/linux.sh@42 -- # killprocess 100426 00:29:05.410 12:40:34 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 100426 ']' 00:29:05.410 12:40:34 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 100426 00:29:05.410 12:40:34 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:29:05.410 12:40:34 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:05.410 12:40:34 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100426 00:29:05.410 12:40:34 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:05.410 12:40:34 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:05.410 killing process with pid 100426 00:29:05.410 12:40:34 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100426' 00:29:05.410 12:40:34 keyring_linux -- common/autotest_common.sh@967 -- # kill 100426 00:29:05.410 12:40:34 keyring_linux -- common/autotest_common.sh@972 -- # wait 100426 00:29:05.974 ************************************ 00:29:05.974 END TEST keyring_linux 00:29:05.974 ************************************ 00:29:05.974 00:29:05.974 real 0m6.434s 00:29:05.974 user 0m12.581s 00:29:05.974 sys 0m1.623s 00:29:05.974 12:40:34 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:05.974 12:40:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:05.974 12:40:34 -- common/autotest_common.sh@1142 -- # return 0 00:29:05.974 12:40:34 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:29:05.974 12:40:34 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:29:05.974 12:40:34 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:29:05.974 12:40:34 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:29:05.974 12:40:34 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:29:05.974 12:40:34 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:29:05.974 12:40:34 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:29:05.974 12:40:34 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:29:05.974 12:40:34 -- spdk/autotest.sh@347 -- # '[' 0 -eq 
1 ']' 00:29:05.974 12:40:34 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:29:05.974 12:40:34 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:29:05.974 12:40:34 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:29:05.974 12:40:34 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:29:05.974 12:40:34 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:29:05.974 12:40:34 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:29:05.974 12:40:34 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:29:05.974 12:40:34 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:29:05.974 12:40:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:05.974 12:40:34 -- common/autotest_common.sh@10 -- # set +x 00:29:05.974 12:40:34 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:29:05.974 12:40:34 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:29:05.974 12:40:34 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:29:05.974 12:40:34 -- common/autotest_common.sh@10 -- # set +x 00:29:07.871 INFO: APP EXITING 00:29:07.871 INFO: killing all VMs 00:29:07.871 INFO: killing vhost app 00:29:07.871 INFO: EXIT DONE 00:29:08.129 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:08.129 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:08.129 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:09.061 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:09.061 Cleaning 00:29:09.061 Removing: /var/run/dpdk/spdk0/config 00:29:09.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:09.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:09.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:09.061 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:09.061 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:09.061 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:09.061 Removing: /var/run/dpdk/spdk1/config 00:29:09.061 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:09.061 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:09.061 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:09.061 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:09.061 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:09.061 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:09.061 Removing: /var/run/dpdk/spdk2/config 00:29:09.061 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:09.061 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:09.061 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:09.061 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:09.061 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:09.061 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:09.061 Removing: /var/run/dpdk/spdk3/config 00:29:09.061 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:09.061 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:09.061 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:09.061 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:09.061 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:09.061 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:09.061 Removing: /var/run/dpdk/spdk4/config 00:29:09.061 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:09.061 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:09.061 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:09.061 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:09.061 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:09.061 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:09.061 Removing: /dev/shm/nvmf_trace.0 00:29:09.061 Removing: /dev/shm/spdk_tgt_trace.pid70897 00:29:09.061 Removing: /var/run/dpdk/spdk0 00:29:09.061 Removing: /var/run/dpdk/spdk1 00:29:09.061 Removing: /var/run/dpdk/spdk2 00:29:09.061 Removing: /var/run/dpdk/spdk3 00:29:09.061 Removing: /var/run/dpdk/spdk4 00:29:09.061 Removing: /var/run/dpdk/spdk_pid100048 00:29:09.061 Removing: /var/run/dpdk/spdk_pid100065 00:29:09.061 Removing: /var/run/dpdk/spdk_pid100309 00:29:09.061 Removing: /var/run/dpdk/spdk_pid100426 00:29:09.061 Removing: /var/run/dpdk/spdk_pid100440 00:29:09.061 Removing: /var/run/dpdk/spdk_pid70752 00:29:09.061 Removing: /var/run/dpdk/spdk_pid70897 00:29:09.061 Removing: /var/run/dpdk/spdk_pid71090 00:29:09.061 Removing: /var/run/dpdk/spdk_pid71182 00:29:09.061 Removing: /var/run/dpdk/spdk_pid71204 00:29:09.061 Removing: /var/run/dpdk/spdk_pid71319 00:29:09.061 Removing: /var/run/dpdk/spdk_pid71337 00:29:09.061 Removing: /var/run/dpdk/spdk_pid71455 00:29:09.061 Removing: /var/run/dpdk/spdk_pid71640 00:29:09.061 Removing: /var/run/dpdk/spdk_pid71786 00:29:09.061 Removing: /var/run/dpdk/spdk_pid71851 00:29:09.061 Removing: /var/run/dpdk/spdk_pid71921 00:29:09.061 Removing: /var/run/dpdk/spdk_pid72012 00:29:09.061 Removing: /var/run/dpdk/spdk_pid72089 00:29:09.061 Removing: /var/run/dpdk/spdk_pid72122 00:29:09.061 Removing: /var/run/dpdk/spdk_pid72158 00:29:09.061 Removing: /var/run/dpdk/spdk_pid72219 00:29:09.061 Removing: /var/run/dpdk/spdk_pid72297 00:29:09.061 Removing: /var/run/dpdk/spdk_pid72735 00:29:09.061 Removing: /var/run/dpdk/spdk_pid72776 00:29:09.061 Removing: /var/run/dpdk/spdk_pid72827 00:29:09.061 Removing: /var/run/dpdk/spdk_pid72843 00:29:09.061 Removing: /var/run/dpdk/spdk_pid72912 00:29:09.061 Removing: /var/run/dpdk/spdk_pid72928 00:29:09.061 Removing: /var/run/dpdk/spdk_pid72995 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73011 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73051 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73075 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73115 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73133 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73257 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73293 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73367 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73419 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73443 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73502 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73536 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73572 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73606 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73641 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73675 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73710 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73739 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73779 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73808 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73848 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73877 00:29:09.061 Removing: /var/run/dpdk/spdk_pid73917 00:29:09.320 Removing: /var/run/dpdk/spdk_pid73946 00:29:09.320 Removing: /var/run/dpdk/spdk_pid73985 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74015 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74050 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74087 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74126 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74161 
00:29:09.320 Removing: /var/run/dpdk/spdk_pid74197 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74261 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74354 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74662 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74674 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74711 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74724 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74740 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74759 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74778 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74793 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74812 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74826 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74847 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74866 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74879 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74895 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74914 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74933 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74943 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74962 00:29:09.320 Removing: /var/run/dpdk/spdk_pid74981 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75002 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75027 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75046 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75081 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75134 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75168 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75183 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75206 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75221 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75223 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75271 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75279 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75313 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75323 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75332 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75347 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75351 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75366 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75376 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75385 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75414 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75440 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75454 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75484 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75493 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75501 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75541 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75553 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75585 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75591 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75600 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75607 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75615 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75622 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75630 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75643 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75710 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75759 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75869 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75899 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75942 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75962 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75984 00:29:09.320 Removing: /var/run/dpdk/spdk_pid75999 00:29:09.320 Removing: /var/run/dpdk/spdk_pid76030 00:29:09.320 Removing: /var/run/dpdk/spdk_pid76051 00:29:09.320 Removing: /var/run/dpdk/spdk_pid76121 00:29:09.320 Removing: 
/var/run/dpdk/spdk_pid76137 00:29:09.320 Removing: /var/run/dpdk/spdk_pid76186 00:29:09.320 Removing: /var/run/dpdk/spdk_pid76251 00:29:09.320 Removing: /var/run/dpdk/spdk_pid76307 00:29:09.320 Removing: /var/run/dpdk/spdk_pid76337 00:29:09.320 Removing: /var/run/dpdk/spdk_pid76422 00:29:09.320 Removing: /var/run/dpdk/spdk_pid76470 00:29:09.578 Removing: /var/run/dpdk/spdk_pid76497 00:29:09.578 Removing: /var/run/dpdk/spdk_pid76721 00:29:09.578 Removing: /var/run/dpdk/spdk_pid76814 00:29:09.578 Removing: /var/run/dpdk/spdk_pid76837 00:29:09.578 Removing: /var/run/dpdk/spdk_pid77156 00:29:09.578 Removing: /var/run/dpdk/spdk_pid77194 00:29:09.578 Removing: /var/run/dpdk/spdk_pid77488 00:29:09.578 Removing: /var/run/dpdk/spdk_pid77896 00:29:09.578 Removing: /var/run/dpdk/spdk_pid78175 00:29:09.578 Removing: /var/run/dpdk/spdk_pid78945 00:29:09.578 Removing: /var/run/dpdk/spdk_pid79768 00:29:09.578 Removing: /var/run/dpdk/spdk_pid79889 00:29:09.578 Removing: /var/run/dpdk/spdk_pid79958 00:29:09.578 Removing: /var/run/dpdk/spdk_pid81226 00:29:09.578 Removing: /var/run/dpdk/spdk_pid81432 00:29:09.578 Removing: /var/run/dpdk/spdk_pid84792 00:29:09.578 Removing: /var/run/dpdk/spdk_pid85100 00:29:09.578 Removing: /var/run/dpdk/spdk_pid85208 00:29:09.578 Removing: /var/run/dpdk/spdk_pid85336 00:29:09.578 Removing: /var/run/dpdk/spdk_pid85369 00:29:09.578 Removing: /var/run/dpdk/spdk_pid85391 00:29:09.578 Removing: /var/run/dpdk/spdk_pid85418 00:29:09.578 Removing: /var/run/dpdk/spdk_pid85511 00:29:09.578 Removing: /var/run/dpdk/spdk_pid85640 00:29:09.578 Removing: /var/run/dpdk/spdk_pid85797 00:29:09.578 Removing: /var/run/dpdk/spdk_pid85878 00:29:09.578 Removing: /var/run/dpdk/spdk_pid86066 00:29:09.578 Removing: /var/run/dpdk/spdk_pid86149 00:29:09.578 Removing: /var/run/dpdk/spdk_pid86242 00:29:09.578 Removing: /var/run/dpdk/spdk_pid86548 00:29:09.578 Removing: /var/run/dpdk/spdk_pid86892 00:29:09.578 Removing: /var/run/dpdk/spdk_pid86894 00:29:09.578 Removing: /var/run/dpdk/spdk_pid89077 00:29:09.578 Removing: /var/run/dpdk/spdk_pid89079 00:29:09.578 Removing: /var/run/dpdk/spdk_pid89355 00:29:09.578 Removing: /var/run/dpdk/spdk_pid89369 00:29:09.578 Removing: /var/run/dpdk/spdk_pid89383 00:29:09.578 Removing: /var/run/dpdk/spdk_pid89419 00:29:09.578 Removing: /var/run/dpdk/spdk_pid89425 00:29:09.578 Removing: /var/run/dpdk/spdk_pid89503 00:29:09.578 Removing: /var/run/dpdk/spdk_pid89515 00:29:09.578 Removing: /var/run/dpdk/spdk_pid89619 00:29:09.578 Removing: /var/run/dpdk/spdk_pid89621 00:29:09.578 Removing: /var/run/dpdk/spdk_pid89729 00:29:09.578 Removing: /var/run/dpdk/spdk_pid89731 00:29:09.578 Removing: /var/run/dpdk/spdk_pid90120 00:29:09.578 Removing: /var/run/dpdk/spdk_pid90169 00:29:09.578 Removing: /var/run/dpdk/spdk_pid90272 00:29:09.578 Removing: /var/run/dpdk/spdk_pid90355 00:29:09.578 Removing: /var/run/dpdk/spdk_pid90660 00:29:09.578 Removing: /var/run/dpdk/spdk_pid90857 00:29:09.578 Removing: /var/run/dpdk/spdk_pid91232 00:29:09.578 Removing: /var/run/dpdk/spdk_pid91738 00:29:09.578 Removing: /var/run/dpdk/spdk_pid92556 00:29:09.578 Removing: /var/run/dpdk/spdk_pid93132 00:29:09.578 Removing: /var/run/dpdk/spdk_pid93144 00:29:09.578 Removing: /var/run/dpdk/spdk_pid95044 00:29:09.578 Removing: /var/run/dpdk/spdk_pid95099 00:29:09.578 Removing: /var/run/dpdk/spdk_pid95159 00:29:09.578 Removing: /var/run/dpdk/spdk_pid95215 00:29:09.578 Removing: /var/run/dpdk/spdk_pid95336 00:29:09.578 Removing: /var/run/dpdk/spdk_pid95397 00:29:09.578 Removing: /var/run/dpdk/spdk_pid95459 
00:29:09.578 Removing: /var/run/dpdk/spdk_pid95514 00:29:09.578 Removing: /var/run/dpdk/spdk_pid95833 00:29:09.578 Removing: /var/run/dpdk/spdk_pid96983 00:29:09.578 Removing: /var/run/dpdk/spdk_pid97123 00:29:09.578 Removing: /var/run/dpdk/spdk_pid97362 00:29:09.578 Removing: /var/run/dpdk/spdk_pid97914 00:29:09.578 Removing: /var/run/dpdk/spdk_pid98069 00:29:09.578 Removing: /var/run/dpdk/spdk_pid98226 00:29:09.578 Removing: /var/run/dpdk/spdk_pid98322 00:29:09.578 Removing: /var/run/dpdk/spdk_pid98486 00:29:09.578 Removing: /var/run/dpdk/spdk_pid98595 00:29:09.578 Removing: /var/run/dpdk/spdk_pid99243 00:29:09.578 Removing: /var/run/dpdk/spdk_pid99278 00:29:09.578 Removing: /var/run/dpdk/spdk_pid99311 00:29:09.578 Removing: /var/run/dpdk/spdk_pid99561 00:29:09.578 Removing: /var/run/dpdk/spdk_pid99595 00:29:09.578 Removing: /var/run/dpdk/spdk_pid99626 00:29:09.578 Clean 00:29:09.836 12:40:38 -- common/autotest_common.sh@1451 -- # return 0 00:29:09.836 12:40:38 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:29:09.836 12:40:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:09.836 12:40:38 -- common/autotest_common.sh@10 -- # set +x 00:29:09.836 12:40:38 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:29:09.836 12:40:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:09.836 12:40:38 -- common/autotest_common.sh@10 -- # set +x 00:29:09.836 12:40:38 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:09.836 12:40:38 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:09.836 12:40:38 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:09.836 12:40:38 -- spdk/autotest.sh@391 -- # hash lcov 00:29:09.836 12:40:38 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:09.836 12:40:38 -- spdk/autotest.sh@393 -- # hostname 00:29:09.836 12:40:38 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:10.094 geninfo: WARNING: invalid characters removed from testname! 
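The coverage post-processing that begins here captures a test-time lcov trace and, in the steps that follow, merges it with the pre-test baseline and strips out-of-tree and example sources. Reduced to its essentials (the long --rc flag lists from the trace are omitted, and the output directory is abbreviated):

  # Essentials of the lcov post-processing performed here.
  out=/home/vagrant/spdk_repo/spdk/../output
  lcov -q -c -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" --no-external \
       -o "$out/cov_test.info"                            # capture test-time counters
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" \
       -o "$out/cov_total.info"                           # merge with the pre-test baseline
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
             '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"   # drop out-of-tree and example sources
  done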
00:29:36.650 12:41:04 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:39.177 12:41:08 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:41.702 12:41:10 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:44.995 12:41:13 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:47.552 12:41:16 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:50.079 12:41:19 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:53.389 12:41:22 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:53.389 12:41:22 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:53.389 12:41:22 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:53.389 12:41:22 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.389 12:41:22 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.389 12:41:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.389 12:41:22 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.389 12:41:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.390 12:41:22 -- paths/export.sh@5 -- $ export PATH 00:29:53.390 12:41:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.390 12:41:22 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:29:53.390 12:41:22 -- common/autobuild_common.sh@444 -- $ date +%s 00:29:53.390 12:41:22 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720788082.XXXXXX 00:29:53.390 12:41:22 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720788082.KKX3SL 00:29:53.390 12:41:22 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:29:53.390 12:41:22 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']' 00:29:53.390 12:41:22 -- common/autobuild_common.sh@451 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:29:53.390 12:41:22 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:29:53.390 12:41:22 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:29:53.390 12:41:22 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:29:53.390 12:41:22 -- common/autobuild_common.sh@460 -- $ get_config_params 00:29:53.390 12:41:22 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:29:53.390 12:41:22 -- common/autotest_common.sh@10 -- $ set +x 00:29:53.390 12:41:22 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:29:53.390 12:41:22 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:29:53.390 12:41:22 -- pm/common@17 -- $ local monitor 00:29:53.390 12:41:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:53.390 12:41:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:53.390 12:41:22 -- pm/common@25 -- $ sleep 1 00:29:53.390 12:41:22 -- pm/common@21 -- $ date +%s 00:29:53.390 12:41:22 -- pm/common@21 -- $ date +%s 00:29:53.390 12:41:22 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d 
/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720788082 00:29:53.390 12:41:22 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720788082 00:29:53.390 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720788082_collect-vmstat.pm.log 00:29:53.390 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720788082_collect-cpu-load.pm.log 00:29:54.322 12:41:23 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:29:54.322 12:41:23 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:29:54.322 12:41:23 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:29:54.322 12:41:23 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:54.322 12:41:23 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:29:54.322 12:41:23 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:54.322 12:41:23 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:54.322 12:41:23 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:54.322 12:41:23 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:54.322 12:41:23 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:54.322 12:41:23 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:54.322 12:41:23 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:54.322 12:41:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:54.322 12:41:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:54.322 12:41:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:54.322 12:41:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:29:54.322 12:41:23 -- pm/common@44 -- $ pid=102199 00:29:54.322 12:41:23 -- pm/common@50 -- $ kill -TERM 102199 00:29:54.322 12:41:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:54.322 12:41:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:29:54.322 12:41:23 -- pm/common@44 -- $ pid=102200 00:29:54.322 12:41:23 -- pm/common@50 -- $ kill -TERM 102200 00:29:54.322 + [[ -n 5898 ]] 00:29:54.322 + sudo kill 5898 00:29:54.332 [Pipeline] } 00:29:54.350 [Pipeline] // timeout 00:29:54.355 [Pipeline] } 00:29:54.367 [Pipeline] // stage 00:29:54.372 [Pipeline] } 00:29:54.388 [Pipeline] // catchError 00:29:54.393 [Pipeline] stage 00:29:54.395 [Pipeline] { (Stop VM) 00:29:54.406 [Pipeline] sh 00:29:54.679 + vagrant halt 00:29:58.863 ==> default: Halting domain... 00:30:04.159 [Pipeline] sh 00:30:04.431 + vagrant destroy -f 00:30:07.705 ==> default: Removing domain... 
00:30:07.974 [Pipeline] sh 00:30:08.250 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:30:08.262 [Pipeline] } 00:30:08.284 [Pipeline] // stage 00:30:08.291 [Pipeline] } 00:30:08.309 [Pipeline] // dir 00:30:08.315 [Pipeline] } 00:30:08.332 [Pipeline] // wrap 00:30:08.337 [Pipeline] } 00:30:08.353 [Pipeline] // catchError 00:30:08.362 [Pipeline] stage 00:30:08.364 [Pipeline] { (Epilogue) 00:30:08.377 [Pipeline] sh 00:30:08.650 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:15.210 [Pipeline] catchError 00:30:15.212 [Pipeline] { 00:30:15.225 [Pipeline] sh 00:30:15.503 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:15.761 Artifacts sizes are good 00:30:15.770 [Pipeline] } 00:30:15.786 [Pipeline] // catchError 00:30:15.797 [Pipeline] archiveArtifacts 00:30:15.803 Archiving artifacts 00:30:15.954 [Pipeline] cleanWs 00:30:15.964 [WS-CLEANUP] Deleting project workspace... 00:30:15.964 [WS-CLEANUP] Deferred wipeout is used... 00:30:15.969 [WS-CLEANUP] done 00:30:15.971 [Pipeline] } 00:30:15.986 [Pipeline] // stage 00:30:15.990 [Pipeline] } 00:30:16.004 [Pipeline] // node 00:30:16.010 [Pipeline] End of Pipeline 00:30:16.134 Finished: SUCCESS